Image decoding devices and image decoding methods
Patent abstract:
MOVING IMAGE ENCODING AND DECODING DEVICES, AND MOVING IMAGE ENCODING AND DECODING METHODS. When the encoding mode corresponding to an encoding block divided by a block dividing unit (2) is an inter encoding mode which is a direct mode, a motion-compensated prediction unit (5) selects a motion vector suitable for the generation of a predicted image from one or more selectable motion vectors, generates a predicted image by carrying out a motion-compensated prediction process on the encoding block using that motion vector, and outputs index information indicating the motion vector to a variable length encoding unit (13); the variable length encoding unit (13) variable-length-encodes the index information.
Publication number: BR112013006499A2
Application number: R112013006499-4
Filing date: 2011-07-21
Publication date: 2020-08-04
Inventors: Yusuke Itani; Shunichi Sekiguchi; Kazuo Sugimoto
Applicant: Mitsubishi Electric Corporation
IPC main class:
Patent description:
“IMAGE DECODING DEVICES AND IMAGE DECODING METHODS” FIELD OF THE INVENTION The present invention relates to a moving image encoding device, a moving image decoding device, a moving image encoding method, and a moving image decoding method which are used for an image compression encoding technology, a compressed image data transmission technology, etc. BACKGROUND OF THE INVENTION For example, in an international standard video encoding system, such as MPEG (Moving Picture Experts Group) or "ITU-T H.26x", a method of defining block data (referred to as a "macroblock" hereinafter), which is a combination of 16 × 16 pixels of a luminance signal and 8 × 8 pixels of each of the color difference signals corresponding to the 16 × 16 pixels of the luminance signal, as a unit, and compressing the image data on the basis of a motion compensation technology and an orthogonal transformation / transform coefficient quantization technology, is used. In the motion compensation processes carried out by a moving image encoding device and a moving image decoding device, a forward picture or a backward picture is referred to, and detection of a motion vector and generation of a predicted image are carried out for each macroblock. At this time, a picture for which only one picture is referred to and on which inter-frame prediction encoding is carried out is referred to as a P picture, and a picture for which two pictures are simultaneously referred to and on which inter-frame prediction encoding is carried out is referred to as a B picture. In AVC/H.264, which is an international standard system (ISO/IEC 14496-10, ITU-T H.264), an encoding mode called a direct mode can be selected when encoding a B picture (for example, refer to nonpatent reference 1).
More specifically, in the direct mode, the macroblock to be encoded has no encoded data of a motion vector, and an encoding mode in which a motion vector of the macroblock to be encoded is generated through a predetermined arithmetic process using a motion vector of a macroblock of another already encoded picture and a motion vector of an adjacent macroblock can be selected. This direct mode includes the following two types of modes: a temporal direct mode and a spatial direct mode. In the temporal direct mode, a motion vector of the macroblock to be encoded is generated by referring to the motion vector of another already encoded picture and then carrying out a scaling process of scaling that motion vector according to the time difference between the other encoded picture and the picture that is the target to be encoded. In the spatial direct mode, a motion vector of the macroblock to be encoded is generated from the motion vector of at least one already encoded macroblock located in the vicinity of the macroblock to be encoded. In the direct modes, either the temporal direct mode or the spatial direct mode can be selected for each slice by using "direct_spatial_mv_pred_flag", which is a flag arranged in each slice header. A mode in which the transform coefficients are not encoded, among the direct modes, is referred to as a skip mode. Hereinafter, a skip mode is also included in the direct modes which will be described below. Fig. 11 is a schematic diagram showing the method of generating a motion vector in the temporal direct mode. In Fig. 11, "P" denotes a P picture and "B" denotes a B picture. In addition, the numerals 0 to 3 denote the order in which the pictures respectively designated by those numerals are displayed, and show that the pictures are displayed at times T0, T1, T2, and T3, respectively. It is assumed that the encoding process on the pictures is carried out in the order of P0, P3, B1, and B2.
For example, a case in which a macroblock MB1 in the picture B2 is encoded in the temporal direct mode will be considered hereinafter. In this case, the motion vector MV of a macroblock MB2 is used, which is a motion vector of the picture P3 closest to the picture B2 among the already encoded pictures located backward with respect to the picture B2 on the time axis, the macroblock MB2 being spatially located at the same position as the macroblock MB1. This motion vector MV refers to the picture P0, and the motion vectors MVL0 and MVL1 that are used when encoding the macroblock MB1 are calculated according to the following equation (1):

MVL0 = MV × (T2 − T0)/(T3 − T0)
MVL1 = MV × (T2 − T3)/(T3 − T0)   (1)

Fig. 12 is a schematic diagram showing the method of generating a motion vector in the spatial direct mode. In Fig. 12, currentMB denotes the macroblock to be encoded. At this time, when the motion vector of an already encoded macroblock A on the left side of the macroblock to be encoded is expressed as MVa, the motion vector of an already encoded macroblock B on the upper side of the macroblock to be encoded is expressed as MVb, and the motion vector of an already encoded macroblock C on the upper right side of the macroblock to be encoded is expressed as MVc, the motion vector MV of the macroblock to be encoded can be calculated by determining the median of these motion vectors MVa, MVb, and MVc, as shown in the following equation (2):

MV = median(MVa, MVb, MVc)   (2)

In the spatial direct mode, a motion vector is determined for each of the forward and backward pictures, and the motion vectors for both of them can be determined using the above-mentioned method. The reference image used for the generation of a predicted image is managed as a reference image list for each vector used for reference. When two vectors are used, the reference image lists are referred to as a list 0 and a list 1, respectively.
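Equations (1) and (2) can be illustrated with the following Python sketch. This is not part of the patent disclosure; it assumes motion vectors represented as (x, y) tuples and the display times T0 to T3 given as plain numbers, ignoring the integer arithmetic an actual codec would use.

```python
def temporal_direct_vectors(mv, t0, t2, t3):
    """Scale the co-located motion vector MV (pointing from P3 back to P0)
    to obtain MVL0 and MVL1 for the macroblock in B2, per equation (1)."""
    mvx, mvy = mv
    td = t3 - t0  # distance between P3 and its reference P0
    tb = t2 - t0  # distance from B2 to the list-0 reference P0
    mvl0 = (mvx * tb / td, mvy * tb / td)
    mvl1 = (mvx * (t2 - t3) / td, mvy * (t2 - t3) / td)
    return mvl0, mvl1

def _median(a, b, c):
    # Median of three scalars.
    return sorted((a, b, c))[1]

def spatial_direct_vector(mva, mvb, mvc):
    """Component-wise median of the neighboring vectors, per equation (2)."""
    return (_median(mva[0], mvb[0], mvc[0]),
            _median(mva[1], mvb[1], mvc[1]))
```

Note that MVL1 comes out with the opposite sign of MVL0, reflecting that it points forward in time from B2 toward P3 while MVL0 points backward toward P0.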
The reference images are stored in the reference image lists in reverse chronological order, and in the general case, list 0 shows a forward reference image and list 1 shows a backward reference image. Alternatively, list 1 can show a forward reference image and list 0 can show a backward reference image, or each of lists 0 and 1 can show both a forward reference image and a backward reference image. In addition, the reference image lists do not have to be arranged in reverse chronological order. For example, the following nonpatent reference 1 describes that the reference image list can be reordered for each slice. Related Art Document Nonpatent reference Nonpatent reference 1: MPEG-4 AVC (ISO/IEC 14496-10) / ITU-T H.264 standards SUMMARY OF THE INVENTION PROBLEMS TO BE SOLVED BY THE INVENTION Because the conventional image encoding device is constructed as above, the conventional image encoding device can switch between the temporal direct mode and the spatial direct mode on a per-slice basis simply by referring to "direct_spatial_mv_pred_flag", which is a flag arranged in each slice header. However, because the conventional image encoding device cannot switch between the temporal direct mode and the spatial direct mode on a per-macroblock basis, even though the optimal direct mode for a macroblock belonging to a slice is, for example, the spatial direct mode, the conventional image encoding device has to use the temporal direct mode for that macroblock when the direct mode corresponding to the slice is determined to be the temporal direct mode, and therefore cannot select the optimal direct mode. In such a case, because the conventional image encoding device cannot select the optimal direct mode, the conventional image encoding device has to encode an unnecessary motion vector, and a problem of an increase in the code amount arises.
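The default list orderings described above can be sketched as follows. This is an illustrative assumption, not the patent's definition: pictures are represented simply by their display times, list 0 leads with past (forward-reference) pictures nearest first, and list 1 leads with future (backward-reference) pictures nearest first.

```python
def default_reference_lists(decoded_times, current_time):
    """Build illustrative default orderings for list 0 and list 1.
    Past pictures are sorted nearest-first, as are future pictures;
    list 0 favors the past, list 1 favors the future."""
    past = sorted((t for t in decoded_times if t < current_time),
                  key=lambda t: current_time - t)
    future = sorted((t for t in decoded_times if t > current_time),
                    key=lambda t: t - current_time)
    return past + future, future + past
```

As the text notes, either list can be reordered for each slice, so such defaults are only a starting point.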
The present invention is made in order to solve the above-mentioned problem, and it is therefore an object of the present invention to provide a moving image encoding device, a moving image decoding device, a moving image encoding method, and a moving image decoding method capable of selecting an optimal direct mode for each predetermined block unit, and thereby being able to reduce the code amount. MEANS FOR SOLVING THE PROBLEM In accordance with the present invention, there is provided a moving image encoding device including: an encoding control unit for determining a maximum size of an encoding block which is a unit to be processed when a prediction process is carried out, and also determining a maximum hierarchical depth when an encoding block having the maximum size is hierarchically divided, and for selecting an encoding mode that determines an encoding method of encoding each encoding block from one or more available encoding modes; and a block dividing unit for dividing an inputted image into encoding blocks having a predetermined size, and also hierarchically dividing each of said encoding blocks, in which, when an inter encoding mode which is a direct mode is selected by the encoding control unit as the encoding mode corresponding to one of the encoding blocks into which the inputted image is divided by the block dividing unit, a motion-compensated prediction unit selects a motion vector suitable for the generation of a predicted image from one or more selectable motion vectors, carries out a motion-compensated prediction process on said encoding block to generate a predicted image using that motion vector, and outputs index information showing the motion vector to a variable length encoding unit, and the variable length encoding unit variable-length-encodes the index information.
ADVANTAGES OF THE INVENTION Because the moving image encoding device according to the present invention is constructed in such a way that the moving image encoding device includes: the encoding control unit for determining a maximum size of an encoding block which is a unit to be processed when a prediction process is carried out, and also determining a maximum hierarchical depth when the encoding block having the maximum size is hierarchically divided, and for selecting an encoding mode that determines an encoding method of encoding each encoding block from one or more available encoding modes; and the block dividing unit for dividing an inputted image into encoding blocks having a predetermined size, and also hierarchically dividing each of the above-mentioned blocks, and, when an inter encoding mode which is a direct mode is selected by the encoding control unit as the encoding mode corresponding to one of the encoding blocks into which the inputted image is divided by the block dividing unit, the motion-compensated prediction unit selects a motion vector suitable for the generation of a predicted image from one or more selectable motion vectors, carries out a motion-compensated prediction process on the above-mentioned block to generate a predicted image using that motion vector, and outputs index information showing the motion vector to the variable length encoding unit, and the variable length encoding unit variable-length-encodes the index information, there is provided an advantage of being able to select an optimal direct mode for each predetermined block unit, and thereby to reduce the code amount. BRIEF DESCRIPTION OF THE FIGURES [Fig. 1] Fig. 1 is a block diagram showing a moving image encoding device according to Embodiment 1 of the present invention; [Fig. 2] Fig. 2 is a block diagram showing a motion-compensated prediction part 5 of the moving image encoding device according to Embodiment 1 of the present invention; [Fig. 3] Fig.
3 is a block diagram showing a direct vector generation part 23 that constructs the motion-compensated prediction part 5; [Fig. 4] Fig. 4 is a block diagram showing a direct vector determination part 33 that constructs the direct vector generation part 23; [Fig. 5] Fig. 5 is a block diagram showing a moving image decoding device according to Embodiment 1 of the present invention; [Fig. 6] Fig. 6 is a block diagram showing a motion-compensated prediction part 54 of the moving image decoding device according to Embodiment 1 of the present invention; [Fig. 7] Fig. 7 is a flow chart showing processing carried out by the moving image encoding device according to Embodiment 1 of the present invention; [Fig. 8] Fig. 8 is a flow chart showing processing carried out by the moving image decoding device according to Embodiment 1 of the present invention; [Fig. 9] Fig. 9 is an explanatory drawing showing a state in which each encoding block having the maximum size is hierarchically divided into a large number of encoding blocks; [Fig. 10] Fig. 10(a) is an explanatory drawing showing a distribution of partitions into which an encoding block is divided, and Fig. 10(b) is an explanatory drawing showing a state in which an encoding mode m(Bn) is assigned to each of the partitions after the hierarchical layer division by using a tree graph; [Fig. 11] Fig. 11 is a schematic diagram showing the method of generating a motion vector in the temporal direct mode; [Fig. 12] Fig. 12 is a schematic diagram showing the method of generating a motion vector in the spatial direct mode; [Fig. 13] Fig. 13 is a schematic diagram showing a method of generating a spatial direct vector from candidates A1 to An, B1 to Bn, C, D, and E for median prediction; [Fig. 14] Fig. 14 is a schematic diagram showing a method of generating a spatial direct vector by scaling according to a distance in the temporal direction; [Fig. 15] Fig.
15 is an explanatory drawing showing an example of the calculation of an evaluation value based on the degree of similarity between a forward predicted image and a backward predicted image; [Fig. 16] Fig. 16 is an explanatory drawing showing an evaluation equation using the variance of motion vectors; [Fig. 17] Fig. 17 is an explanatory drawing showing spatial vectors MV_A, MV_B, and MV_C, and temporal vectors MV_1 to MV_8; [Fig. 18] Fig. 18 is an explanatory drawing showing the generation of a candidate vector from a large number of already encoded vectors; [Fig. 19] Fig. 19 is an explanatory drawing showing an example of the calculation of an evaluation value SAD from a combination of only images located backward in time; [Fig. 20] Fig. 20 is an explanatory drawing showing a search for an image similar to an L-shaped template; [Fig. 21] Fig. 21 is an explanatory drawing showing an example in which the size of an encoding block Bn is Ln = kMn; [Fig. 22] Fig. 22 is an explanatory drawing showing an example of a division satisfying (Ln+1, Mn+1) = (Ln/2, Mn/2); [Fig. 23] Fig. 23 is an explanatory drawing showing an example in which either the division shown in Fig. 21 or that shown in Fig. 22 can be selected; [Fig. 24] Fig. 24 is an explanatory drawing showing an example in which a transformation block size unit has a hierarchical structure; [Fig. 25] Fig. 25 is a block diagram showing a motion-compensated prediction part 5 of a moving image encoding device according to Embodiment 3 of the present invention; [Fig. 26] Fig. 26 is a block diagram showing a direct vector generation part 25 that constructs the motion-compensated prediction part 5; [Fig. 27] Fig. 27 is a block diagram showing an initial vector generation part 34 that constructs the direct vector generation part 25; [Fig. 28] Fig. 28 is a block diagram showing an initial vector determination part 73 that constructs the initial vector generation part 34; [Fig. 29] Fig.
29 is a block diagram showing a motion-compensated prediction part 54 of a moving image decoding device according to Embodiment 3 of the present invention; [Fig. 30] Fig. 30 is an explanatory drawing showing a process of searching for a motion vector; [Fig. 31] Fig. 31 is a block diagram showing a motion-compensated prediction part 5 of a moving image encoding device according to Embodiment 4 of the present invention; [Fig. 32] Fig. 32 is a block diagram showing a motion-compensated prediction part 54 of a moving image decoding device according to Embodiment 4 of the present invention; [Fig. 33] Fig. 33 is an explanatory drawing showing a direct vector candidate index in which a selectable motion vector and index information showing the motion vector are described; [Fig. 34] Fig. 34 is an explanatory drawing showing an example of encoding only index information showing a vector; [Fig. 35] Fig. 35 is a block diagram showing a direct vector generation part 26 that constructs the motion-compensated prediction part 5; [Fig. 36] Fig. 36 is a block diagram showing a motion-compensated prediction part 5 of a moving image encoding device according to Embodiment 5 of the present invention; [Fig. 37] Fig. 37 is a block diagram showing a direct vector generation part 27 that constructs the motion-compensated prediction part 5; [Fig. 38] Fig. 38 is a block diagram showing a motion-compensated prediction part 54 of a moving image decoding device according to Embodiment 5 of the present invention; [Fig. 39] Fig. 39 is a block diagram showing a direct vector generation part 26 that constructs the motion-compensated prediction part 5; [Fig. 40] Fig. 40 is an explanatory drawing showing a correlation with an adjacent block; [Fig. 41] Fig. 41 is an explanatory drawing of a list showing one or more selectable motion vectors for each of the block sizes provided for encoding blocks; [Fig. 42] Fig. 42 is an explanatory drawing showing an example of a list whose maximum block size is "128"; [Fig. 43] Fig.
43 is an explanatory drawing of a list showing one or more selectable motion vectors for each of the division patterns provided for encoding blocks; [Fig. 44] Fig. 44 is a flow chart showing a process of transmitting list information in a moving image encoding device; [Fig. 45] Fig. 45 is a flow chart showing a process of receiving list information in a moving image decoding device; [Fig. 46] Fig. 46 is an explanatory drawing showing an example of encoding a change flag set to "ON" and list information showing a changed list because "temporal" in a list is changed from selectable to non-selectable; [Fig. 47] Fig. 47 is an explanatory drawing showing an example of changing a list currently being held because a change flag is set to "ON"; [Fig. 48] Fig. 48 is an explanatory drawing showing an example of preparing a change flag for each block size, and encoding only list information associated with a block size for which the selectable motion vectors are changed; and [Fig. 49] Fig. 49 is an explanatory drawing showing an example of searching, starting from a target block, for a block that is inter-encoded, and setting all the vectors included in that block as spatial vector candidates. EMBODIMENTS OF THE INVENTION Hereinafter, the preferred embodiments of the present invention will be explained in detail with reference to the drawings. Embodiment 1. In this Embodiment 1, a moving image encoding device that inputs each frame image of a video, carries out variable length encoding on the frame image after carrying out a compression process with an orthogonal transformation and quantization on a prediction difference signal which the moving image encoding device acquires by carrying out a motion-compensated prediction between adjacent frames, to generate a bit stream, and a moving image decoding device that decodes the bit stream emitted from the moving image encoding device will be explained.
The moving image encoding device according to this Embodiment 1 is characterized in that the moving image encoding device adapts itself to a local change of a video signal in the spatial and temporal directions to divide the video signal into regions of various sizes, and carries out intra-frame and inter-frame adaptive encoding. In general, a video signal has a characteristic of its complexity varying locally in space and time. There can be a case in which a pattern having a uniform signal characteristic over a relatively large image area, such as a sky image or a wall image, and a pattern having a complicated texture over a small image area, such as a person image or a picture including a fine texture, co-exist in a given video frame from a spatial point of view. Also from a temporal point of view, a relatively large image area, such as a sky image or a wall image, has a small local change in its pattern in the temporal direction, while an image of a moving person or object has a larger temporal change because its outline makes a rigid-body movement and a non-rigid-body movement with respect to time. Although in the encoding process a process of generating a prediction difference signal having small signal power and small entropy by using temporal and spatial prediction is carried out, thereby reducing the total code amount, the amount of parameters used for the prediction can be reduced as long as the parameters can be applied uniformly to as large an image signal region as possible. On the other hand, because the amount of prediction errors increases when the same prediction parameter is applied to an image signal pattern having a large change in time and space, the code amount of the prediction difference signal cannot be reduced.
It is therefore desirable to reduce the size of the region that is subjected to the prediction process when carrying out the prediction process on an image signal pattern having a large change in time and space, thereby reducing the power and entropy of the prediction difference signal even though the volume of parameter data used for the prediction is increased. In order to carry out encoding adapted to such typical characteristics of a video signal, the moving image encoding device according to this Embodiment 1 hierarchically divides each region of the video signal having a predetermined maximum block size into blocks, and carries out the prediction process and the process of encoding the prediction difference on each of the blocks into which each region is divided. A video signal to be processed by the moving image encoding device according to this Embodiment 1 can be an arbitrary video signal in which each video frame consists of a series of digital samples (pixels) in two dimensions, horizontal and vertical, such as a YUV signal consisting of a luminance signal and two color difference signals, a color image video signal in an arbitrary color space, such as an RGB signal, emitted from a digital image sensor, a monochrome image signal, or an infrared image signal. The gradation of each pixel can be 8 bits, 10 bits, or 12 bits. In the following explanation, the inputted video signal is a YUV signal unless otherwise specified. It is further assumed that the two color difference components U and V are signals having a 4:2:0 format which are subsampled with respect to the luminance component Y. The data unit to be processed which corresponds to each frame of the video signal is referred to as a "picture." In this Embodiment 1, a "picture" is explained as a video frame signal on which progressive scanning is carried out. When the video signal is an interlaced signal, a "picture" can alternatively be a field image signal which is a unit constructing a video frame. Fig.
1 is a block diagram showing the moving image encoding device according to Embodiment 1 of the present invention. Referring to Fig. 1, an encoding control part 1 carries out a process of determining a maximum size for each of the encoding blocks which is a unit to be processed at a time when a motion-compensated prediction process (inter-frame prediction process) or an intra prediction process (intra-frame prediction process) is carried out, and also determining an upper limit on the number of hierarchical layers, i.e., a maximum hierarchical depth in a hierarchy in which each of the encoding blocks having the maximum size is hierarchically divided into blocks. The encoding control part 1 also carries out a process of selecting an encoding mode suitable for each of the encoding blocks into which each encoding block having the maximum size is hierarchically divided, from among one or more available encoding modes (one or more intra encoding modes and one or more inter encoding modes (including an inter encoding mode which is a direct mode)). The encoding control part 1 constructs an encoding control unit. A block dividing part 2 carries out a process of, when receiving a video signal showing an inputted image, dividing the inputted image shown by the video signal into encoding blocks each having the maximum size determined by the encoding control part 1, and also hierarchically dividing each of the encoding blocks into blocks until the number of hierarchical layers reaches the upper limit determined by the encoding control part 1. The block dividing part 2 constructs a block dividing unit.
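The division performed by the block dividing part 2 can be sketched as tiling the picture into maximum-size blocks and then recursively quartering each one up to the maximum hierarchical depth chosen by the encoding control part 1. This is a hypothetical illustration, not the patent's implementation; the predicate `needs_split` stands in for whatever mode-decision criterion the encoder applies.

```python
def split_into_max_blocks(width, height, max_size):
    """Tile the picture into encoding blocks of the maximum size,
    represented here as (x, y, size) tuples."""
    return [(x, y, max_size)
            for y in range(0, height, max_size)
            for x in range(0, width, max_size)]

def hierarchical_split(block, depth, max_depth, needs_split):
    """Recursively quarter a block until the maximum hierarchical depth
    is reached or the predicate decides the block is encoded as-is."""
    x, y, size = block
    if depth == max_depth or not needs_split(block):
        return [block]
    half = size // 2
    children = [(x, y, half), (x + half, y, half),
                (x, y + half, half), (x + half, y + half, half)]
    out = []
    for child in children:
        out.extend(hierarchical_split(child, depth + 1, max_depth, needs_split))
    return out
```

With a maximum depth of 2 and a predicate that always splits, a 64 × 64 block yields sixteen 16 × 16 blocks, matching the hierarchy sketched in Fig. 9.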
A selection switch 3 carries out a process of, when the encoding mode selected by the encoding control part 1 for an encoding block generated through the division by the block dividing part 2 is an intra encoding mode, outputting the encoding block to an intra prediction part 4, and, when the encoding mode selected by the encoding control part 1 for the encoding block generated through the division by the block dividing part 2 is an inter encoding mode, outputting the encoding block to a motion-compensated prediction part 5. The intra prediction part 4 carries out a process of, when receiving the encoding block generated through the division by the block dividing part 2 from the selection switch 3, carrying out an intra prediction process on the encoding block using intra prediction parameters emitted from the encoding control part 1 to generate a predicted image. An intra prediction unit is comprised of the selection switch 3 and the intra prediction part 4. The motion-compensated prediction part 5 carries out a process of, when an inter encoding mode which is a direct mode is selected by the encoding control part 1 as the encoding mode suitable for the encoding block generated through the division by the block dividing part 2, generating a spatial direct vector in the spatial direct mode from the motion vector of an already encoded block located in the vicinity of the encoding block and also generating a temporal direct vector in the temporal direct mode from the motion vector of an already encoded picture which can be referred to by the encoding block, selecting, from the spatial direct vector and the temporal direct vector, the direct vector that provides a larger correlation between reference images, and carrying out a motion-compensated prediction process on the encoding block using the selected direct vector to thereby generate a predicted image.
Conversely, when an inter encoding mode other than the direct modes is selected by the encoding control part 1 as the encoding mode suitable for the encoding block generated through the division by the block dividing part 2, the motion-compensated prediction part 5 carries out a process of searching through the encoding block and a reference image stored in a motion-compensated prediction frame memory 12 for a motion vector, and carrying out a motion-compensated prediction process on the encoding block using the motion vector to generate a predicted image. A motion-compensated prediction unit is comprised of the selection switch 3 and the motion-compensated prediction part 5. A subtracting part 6 carries out a process of subtracting the predicted image generated by the intra prediction part 4 or the motion-compensated prediction part 5 from the encoding block generated through the division by the block dividing part 2, to generate a difference image (= the encoding block − the predicted image). The subtracting part 6 constructs a difference image generation unit. A transformation/quantization part 7 carries out a process of carrying out an orthogonal transformation process (e.g., a DCT (discrete cosine transform), or an orthogonal transformation process, such as a KL transform, in which bases are designed for a specific learning sequence in advance) on the difference image generated by the subtracting part 6 in units of a block having a transformation block size included in prediction difference encoding parameters emitted from the encoding control part 1, and also quantizing the transform coefficients of the difference image using a quantization parameter included in the prediction difference encoding parameters, to output the quantized transform coefficients as compressed data of the difference image. The transformation/quantization part 7 constructs an image compression unit.
An inverse quantization/inverse transformation part 8 carries out a process of inverse-quantizing the compressed data emitted from the transformation/quantization part 7 using the quantization parameter included in the prediction difference encoding parameters emitted from the encoding control part 1, and carrying out an inverse transformation process (e.g., an inverse DCT (inverse discrete cosine transform), or an inverse transformation process such as an inverse KL transform) on the inverse-quantized compressed data, to output the result of the inverse transformation process as a local decoded prediction difference signal. An adding part 9 carries out a process of adding the local decoded prediction difference signal emitted from the inverse quantization/inverse transformation part 8 and the prediction signal showing the predicted image generated by the intra prediction part 4 or the motion-compensated prediction part 5, to generate a local decoded image signal showing a local decoded image. A memory 10 for intra prediction is a recording medium, such as a RAM, for storing the local decoded image shown by the local decoded image signal generated by the adding part 9 as an image which the intra prediction part 4 will use when carrying out the intra prediction process the next time. A loop filter part 11 carries out a process of compensating for an encoding distortion included in the local decoded image signal generated by the adding part 9, and outputting the local decoded image shown by the local decoded image signal on which the loop filter part has carried out the encoding distortion compensation to a motion-compensated prediction frame memory 12 as a reference image.
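The lossy round trip through the transformation/quantization part 7 and the inverse quantization/inverse transformation part 8 can be illustrated with a uniform scalar quantizer. This is only an illustrative sketch under simplifying assumptions; a real codec uses scaled integer arithmetic and per-frequency quantization weights.

```python
def quantize(coeffs, qstep):
    """Uniform scalar quantization of transform coefficients:
    each coefficient is mapped to the nearest multiple of qstep."""
    return [int(round(c / qstep)) for c in coeffs]

def dequantize(levels, qstep):
    """Inverse quantization: reconstruction is lossy, since only
    multiples of the quantization step can be recovered."""
    return [level * qstep for level in levels]
```

The residual error between the original coefficients and the dequantized ones is the quantization distortion that the local decoding path (parts 8 to 11) makes visible to the encoder, so that prediction is formed from the same images the decoder will have.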
The motion-compensated prediction frame memory 12 is a recording medium, such as a RAM, for storing the local decoded image on which the loop filter part 11 has carried out the filtering process as a reference image which the motion-compensated prediction part 5 will use when carrying out the motion-compensated prediction process the next time. A variable length encoding part 13 carries out a process of variable-length-encoding the compressed data emitted from the transformation/quantization part 7, the encoding mode and the prediction difference encoding parameters which are emitted from the encoding control part 1, and the intra prediction parameters emitted from the intra prediction part 4 or the inter prediction parameters emitted from the motion-compensated prediction part 5, to generate a bit stream into which encoded data of the compressed data, encoded data of the encoding mode, encoded data of the prediction difference encoding parameters, and encoded data of the intra prediction parameters or the inter prediction parameters are multiplexed. The variable length encoding part 13 constructs a variable length encoding unit. Fig. 2 is a block diagram showing the motion-compensated prediction part 5 of the moving image encoding device according to Embodiment 1 of the present invention. Referring to Fig. 2, a selection switch 21 carries out a process of outputting the encoding block generated through the division by the block dividing part 2 to a motion vector search part 22 when the encoding mode selected by the encoding control part 1 is an inter mode other than the direct modes, and outputting the encoding block generated through the division by the block dividing part 2 to a direct vector generation part 23 when the encoding mode is an inter mode which is a direct mode.
Because the direct vector generation part 23 does not use the coding block, which is generated through the division by the block dividing part 2, when generating a direct vector, the selection switch 21 does not necessarily have to output the coding block to the direct vector generation part 23. The motion vector search part 22 performs a process of searching for an optimal motion vector in the inter mode while referring to both the coding block output from the selection switch 21 and a reference image stored in the motion-compensated prediction frame memory 12, and outputting the motion vector to a motion compensation processing part 24. The direct vector generation part 23 performs a process of generating a spatial direct vector in a spatial direct mode from the motion vector of an already-encoded block located in the vicinity of the coding block, and also generating a temporal direct vector in a temporal direct mode from the motion vector of an already-encoded picture that can be referred to by the coding block, and selecting, from the spatial direct vector and the temporal direct vector, the direct vector that provides a higher correlation between reference images. The motion compensation processing part 24 performs a process of carrying out a motion-compensated prediction process on the basis of the inter prediction parameters output from the encoding control part 1, using either the motion vector searched for by the motion vector search part 22 or the direct vector selected by the direct vector generation part 23, and one or more frames of reference images stored in the motion-compensated prediction frame memory 12, to generate a prediction image. The motion compensation processing part 24 outputs the inter prediction parameters which it uses when performing the motion-compensated prediction process to the variable length encoding part 13.
When the coding mode selected by the encoding control part 1 is an inter mode other than a direct mode, the motion compensation processing part 24 includes the motion vector searched for by the motion vector search part 22 in the inter prediction parameters, and outputs these inter prediction parameters to the variable length encoding part 13. Fig. 3 is a block diagram showing the direct vector generation part 23 that constructs the motion-compensated prediction part 5. Referring to Fig. 3, a spatial direct vector generation part 31 performs a process of reading the motion vector of an already-encoded block located in the vicinity of the coding block from among the motion vectors of already-encoded blocks (the motion vectors of already-encoded blocks are stored in a motion vector memory, not shown, or an internal memory of the motion-compensated prediction part 5), to generate a spatial direct vector in the spatial direct mode from the motion vector. A temporal direct vector generation part 32 performs a process of reading the motion vector of a block spatially located at the same position as the coding block, which is the motion vector of an already-encoded picture that can be referred to by the coding block, from among the motion vectors of already-encoded blocks, to generate a temporal direct vector in the temporal direct mode from the motion vector. A direct vector determination part 33 performs a process of calculating an evaluated value in the spatial direct mode using the spatial direct vector generated by the spatial direct vector generation part 31, and also calculating an evaluated value in the temporal direct mode using the temporal direct vector generated by the temporal direct vector generation part 32, and comparing the evaluated value in the spatial direct mode with the evaluated value in the temporal direct mode to select either the spatial direct vector or the temporal direct vector. Fig.
4 is a block diagram showing the direct vector determination part 33 that constructs the direct vector generation part 23. Referring to Fig. 4, a motion compensation part 41 performs a process of generating a list 0 prediction image in the spatial direct mode (i.e., a forward prediction image in the spatial direct mode) and a list 1 prediction image in the spatial direct mode (i.e., a backward prediction image in the spatial direct mode) using the spatial direct vector generated by the spatial direct vector generation part 31, and also generating a list 0 prediction image in the temporal direct mode (i.e., a forward prediction image in the temporal direct mode) and a list 1 prediction image in the temporal direct mode (i.e., a backward prediction image in the temporal direct mode) using the temporal direct vector generated by the temporal direct vector generation part 32. A similarity calculation part 42 performs a process of calculating the degree of similarity between the list 0 prediction image (forward prediction image) and the list 1 prediction image (backward prediction image) in the spatial direct mode as the evaluated value in the spatial direct mode, and also calculating the degree of similarity between the list 0 prediction image (forward prediction image) and the list 1 prediction image (backward prediction image) in the temporal direct mode as the evaluated value in the temporal direct mode.
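As a sketch of the evaluated-value computation and the selection it drives: the "degree of similarity" between the list 0 and list 1 prediction images can be measured, for example, as a sum of absolute differences (SAD), where a smaller SAD means a higher similarity. The use of SAD and the function names below are illustrative assumptions; the text does not prescribe a particular similarity measure.

```python
# Illustrative sketch (assumption: similarity measured by SAD; a smaller SAD
# means the forward and backward prediction images agree more closely).

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized pixel blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def select_direct_vector(spatial_pred, temporal_pred,
                         spatial_vector, temporal_vector):
    """spatial_pred / temporal_pred: (list0_image, list1_image) pairs.
    Returns the direct vector whose two prediction images are more similar."""
    spatial_eval = sad(*spatial_pred)    # evaluated value, spatial direct mode
    temporal_eval = sad(*temporal_pred)  # evaluated value, temporal direct mode
    return spatial_vector if spatial_eval <= temporal_eval else temporal_vector

# Tiny 2x2 example: the temporal pair matches exactly, so the temporal
# direct vector is selected.
sp = ([[10, 12], [9, 11]], [[14, 12], [9, 15]])
tp = ([[10, 12], [9, 11]], [[10, 12], [9, 11]])
print(select_direct_vector(sp, tp, "spatial_mv", "temporal_mv"))
```

The intuition is that when the forward and backward prediction images produced by a direct vector resemble each other, that vector is tracking the true motion well, so the mode with the higher similarity is kept.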
A direct vector selection part 43 performs a process of comparing the degree of similarity between the list 0 prediction image (forward prediction image) and the list 1 prediction image (backward prediction image) in the spatial direct mode, which is calculated by the similarity calculation part 42, with the degree of similarity between the list 0 prediction image (forward prediction image) and the list 1 prediction image (backward prediction image) in the temporal direct mode, which is also calculated by the similarity calculation part 42, and selecting, from the spatial direct vector and the temporal direct vector, the direct vector in the direct mode that provides the higher degree of similarity between the list 0 prediction image (forward prediction image) and the list 1 prediction image (backward prediction image). Fig. 5 is a block diagram showing a moving image decoding device according to Embodiment 1 of the present invention. Referring to Fig. 5, a variable length decoding part 51 performs a process of variable-length-decoding the encoded data multiplexed in the bitstream to acquire the compressed data, the coding mode, the prediction difference coding parameters, and the intra prediction parameters or the inter prediction parameters, which are associated with each coding block into which each frame of the video is hierarchically divided, and outputting the compressed data and the prediction difference coding parameters to an inverse quantization/inverse transform part 55, and also outputting the coding mode and the intra prediction parameters or the inter prediction parameters to a selection switch 52. The variable length decoding part 51 constructs a variable length decoding unit. The selection switch 52 performs a process of, when the coding mode associated with the coding block, which is output from the variable length decoding part 51, is an intra coding mode, outputting the intra prediction parameters output thereto from the variable length decoding part 51 to an intra prediction part 53, and, when the coding mode is an inter coding mode, outputting the inter prediction parameters output thereto from the variable length decoding part 51 to a motion-compensated prediction part 54. The intra prediction part 53 performs a process of carrying out an intra prediction process on the coding block using the intra prediction parameters output thereto from the selection switch 52 to generate a prediction image. An intra prediction unit is comprised of the selection switch 52 and the intra prediction part 53. The motion-compensated prediction part 54 performs a process of, when the coding mode associated with the coding block, which is output from the variable length decoding part 51, is an inter coding mode that is a direct mode, generating a spatial direct vector in the spatial direct mode from the motion vector of an already-decoded block located in the vicinity of the coding block, and also generating a temporal direct vector in the temporal direct mode from the motion vector of an already-decoded picture that can be referred to by the coding block, selecting, from the spatial direct vector and the temporal direct vector, the direct vector that provides a higher correlation between reference images, and carrying out a motion-compensated prediction process on the coding block using the selected direct vector to generate a prediction image. The motion-compensated prediction part 54 also performs a process of carrying out a motion-compensated prediction process on the coding block using the motion vector included in the inter prediction parameters output thereto from the variable length decoding part 51 to generate a prediction image when the coding mode associated with the coding block, which is output from the variable length decoding part 51, is an inter coding mode other than a direct mode.
A motion-compensated prediction unit is comprised of the selection switch 52 and the motion-compensated prediction part 54. An inverse quantization/inverse transform part 55 performs a process of inverse-quantizing the compressed data associated with the coding block, which is output thereto from the variable length decoding part 51, using the quantization parameter included in the prediction difference coding parameters output thereto from the variable length decoding part 51, and performing an inverse transform process (e.g., an inverse DCT (inverse discrete cosine transform) or an inverse transform process such as an inverse KL transform) on the inverse-quantized compressed data in units of a block having the transform block size included in the prediction difference coding parameters, to output the result of the inverse transform process as a decoded prediction difference signal (a signal showing a pre-compression difference image). The inverse quantization/inverse transform part 55 constructs a difference image generating unit. An addition part 56 performs a process of adding the decoded prediction difference signal output thereto from the inverse quantization/inverse transform part 55 and the prediction signal showing the prediction image generated by the intra prediction part 53 or the motion-compensated prediction part 54, to generate a decoded image signal showing a decoded image. The addition part 56 constructs a decoded image generating unit. An intra prediction memory 57 is a recording medium, such as a RAM, for storing the decoded image shown by the decoded image signal generated by the addition part 56 as an image that the intra prediction part 53 will use when performing the intra prediction process the next time.
A loop filter part 58 performs a process of compensating for a coding distortion included in the decoded image signal generated by the addition part 56, and outputting the decoded image shown by the coding-distortion-compensated decoded image signal to a motion-compensated prediction frame memory 59 as a reference image. The motion-compensated prediction frame memory 59 is a recording medium, such as a RAM, for storing the decoded image on which the loop filter part 58 has performed the filtering process as a reference image that the motion-compensated prediction part 54 will use when performing the motion-compensated prediction process the next time. Fig. 6 is a block diagram showing the motion-compensated prediction part 54 of the moving image decoding device according to Embodiment 1 of the present invention. Referring to Fig. 6, a selection switch 61 performs a process of, when the coding mode associated with the coding block, which is output thereto from the variable length decoding part 51, is an inter mode other than a direct mode, outputting the inter prediction parameters (including the motion vector) output thereto from the variable length decoding part 51 to a motion compensation processing part 63, and, when the coding mode is an inter mode that is a direct mode, outputting the inter prediction parameters output thereto from the variable length decoding part 51 to a direct vector generation part 62. The direct vector generation part 62 performs a process of generating a spatial direct vector in the spatial direct mode from the motion vector of an already-decoded block located in the vicinity of the coding block, and also generating a temporal direct vector in the temporal direct mode from the motion vector of an already-decoded picture that can be referred to by the coding block, and selecting, from the spatial direct vector and the temporal direct vector, the direct vector that provides a higher correlation between reference images.
The direct vector generation part 62 also performs a process of outputting the inter prediction parameters output thereto from the selection switch 61 to the motion compensation processing part 63. The internal structure of the direct vector generation part 62 is the same as that of the direct vector generation part 23 shown in Fig. 2. The motion compensation processing part 63 performs a process of carrying out a motion-compensated prediction process on the basis of the inter prediction parameters output thereto from the direct vector generation part 62, using either the motion vector included in the inter prediction parameters output thereto from the selection switch 61 or the direct vector selected by the direct vector generation part 62, and a reference image of one frame stored in the motion-compensated prediction frame memory 59, to generate a prediction image. In the example of Fig. 1, the encoding control part 1, the block dividing part 2, the selection switch 3, the intra prediction part 4, the motion-compensated prediction part 5, the subtraction part 6, the transform/quantization part 7, the inverse quantization/inverse transform part 8, the addition part 9, the loop filter part 11, and the variable length encoding part 13, which are the components of the moving image encoding device, can consist of pieces of hardware for exclusive use (e.g., integrated circuits on each of which a CPU is mounted, one-chip microcomputers, or the like), respectively.
As an alternative, the moving image encoding device can consist of a computer, and a program in which the processes performed by the encoding control part 1, the block dividing part 2, the selection switch 3, the intra prediction part 4, the motion-compensated prediction part 5, the subtraction part 6, the transform/quantization part 7, the inverse quantization/inverse transform part 8, the addition part 9, the loop filter part 11, and the variable length encoding part 13 are described can be stored in a memory of the computer, and the CPU of the computer can be made to execute the program stored in the memory. Fig. 7 is a flow chart showing the processing performed by the moving image encoding device according to Embodiment 1 of the present invention. In the example of Fig. 5, the variable length decoding part 51, the selection switch 52, the intra prediction part 53, the motion-compensated prediction part 54, the inverse quantization/inverse transform part 55, the addition part 56, and the loop filter part 58, which are the components of the moving image decoding device, can consist of pieces of hardware for exclusive use (e.g., integrated circuits on each of which a CPU is mounted, one-chip microcomputers, or the like), respectively. As an alternative, the moving image decoding device can consist of a computer, and a program in which the processes performed by the variable length decoding part 51, the selection switch 52, the intra prediction part 53, the motion-compensated prediction part 54, the inverse quantization/inverse transform part 55, the addition part 56, and the loop filter part 58 are described can be stored in a memory of the computer, and the CPU of the computer can be made to execute the program stored in the memory. Fig. 8 is a flow chart showing the processing performed by the moving image decoding device according to Embodiment 1 of the present invention.
Next, the operation of the moving image encoding device and the operation of the moving image decoding device will be explained. First, the processing performed by the moving image encoding device shown in Fig. 1 will be explained. The encoding control part 1 first determines a maximum size of each of the coding blocks which is a unit to be processed at a time when a motion-compensated prediction process (inter-frame prediction process) or an intra prediction process (intra-frame prediction process) is performed, and also determines an upper limit on the number of hierarchical layers in a hierarchy in which each of the coding blocks having the maximum size is hierarchically divided into blocks (step ST1 of Fig. 7). As a method of determining the maximum size of each of the coding blocks, for example, a method of determining a maximum size for all the pictures according to the resolution of the input image can be considered. Further, a method of quantifying a variation in the complexity of a local motion of the input image as a parameter and then determining a small size for a picture having a large and vigorous motion while determining a large size for a picture having a small motion can also be considered. As a method of determining the upper limit on the number of hierarchical layers, for example, a method of increasing the depth of the hierarchy, i.e., the number of hierarchical layers, to make it possible to detect a finer motion as the input image has a larger and more vigorous motion, or decreasing the depth of the hierarchy, i.e., the number of hierarchical layers, as the input image has a smaller motion can be considered.
The encoding control part 1 also selects a coding mode suitable for each of the coding blocks into which each coding block having the maximum size is hierarchically divided, from among one or more available coding modes (M intra coding modes and N inter coding modes (including an inter coding mode that is a direct mode)) (step ST2). Although a detailed explanation of the method of selecting a coding mode for use in the encoding control part 1 is omitted because the selection method is a known technique, there is, for example, a method of carrying out a coding process on the coding block using an arbitrary available coding mode to examine the coding efficiency, and selecting a coding mode having the highest degree of coding efficiency from among a large number of available coding modes. When receiving the video signal showing the input image, the block dividing part 2 divides the input image shown by the video signal into coding blocks each having the maximum size determined by the encoding control part 1, and also divides each of the coding blocks hierarchically into blocks until the number of hierarchical layers reaches the upper limit determined by the encoding control part 1. Fig. 9 is an explanatory drawing showing a state in which each coding block having the maximum size is hierarchically divided into a large number of coding blocks. In the example of Fig. 9, each coding block having the maximum size is a coding block B0 in the 0th hierarchical layer, and its luminance component has a size of (L0, M0). Further, in the example of Fig. 9, by carrying out the hierarchical division with the coding block B0 having the maximum size set as a starting point until the depth of the hierarchy reaches a predetermined depth set separately according to a tree structure, coding blocks Bn can be obtained. At the depth of n, each coding block Bn is an image area having a size of (Ln, Mn).
In this example, although Mn can be the same as or differ from Ln, the case of Ln = Mn is shown in Fig. 9. Hereinafter, the size of each coding block Bn is defined as the size of (Ln, Mn) in the luminance component of the coding block Bn. Because the block dividing part 2 performs the division in a quadtree manner, (Ln+1, Mn+1) = (Ln/2, Mn/2) is always established. In the case of a color video signal (4:4:4 format) in which all the color components have the same sample count, such as an RGB signal, all the color components have a size of (Ln, Mn), while in the case of handling a 4:2:0 format, a corresponding color difference component has a coding block size of (Ln/2, Mn/2). Hereinafter, a coding mode selectable for each coding block Bn in the nth hierarchical layer is expressed as m(Bn). In the case of a color video signal consisting of a large number of color components, the coding mode m(Bn) can be formed in such a way that an individual mode is used for each color component. Hereinafter, an explanation will be made by assuming that the coding mode m(Bn) indicates the one for the luminance component of each coding block having a 4:2:0 format in a YUV signal unless otherwise specified. The coding mode m(Bn) can be one of one or more intra coding modes (generically referred to as "INTRA") or one or more inter coding modes (generically referred to as "INTER"), and the encoding control part 1 selects, as the coding mode m(Bn), a coding mode with the highest degree of coding efficiency for each coding block Bn from among all the coding modes available in the picture currently being processed or a subset of these coding modes, as mentioned above. Each coding block Bn is further divided into one or more prediction units (partitions) by the block dividing part, as shown in Fig. 9. Hereinafter, each partition belonging to each coding block Bn is expressed as Pin (i shows a partition number in the nth hierarchical layer).
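The hierarchical division described above can be sketched as follows, assuming a quadtree, which is consistent with (Ln+1, Mn+1) = (Ln/2, Mn/2); the function and variable names are illustrative, not taken from the patent text.

```python
# Illustrative quadtree division of a maximum-size coding block.
# Each block (x, y, size) in layer n splits into four blocks of half the
# size in layer n+1, until the upper limit on hierarchical layers is reached.

def divide_coding_block(x, y, size, depth, max_depth, blocks):
    blocks.append((depth, x, y, size))  # record coding block Bn at this depth
    if depth == max_depth:
        return
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            divide_coding_block(x + dx, y + dy, half,
                                depth + 1, max_depth, blocks)

blocks = []
divide_coding_block(0, 0, 64, 0, 2, blocks)  # L0 = M0 = 64, two extra layers
# 1 block at depth 0, 4 at depth 1, 16 at depth 2
print(len(blocks))  # 21
```

A real encoder would not expand the full tree: the encoding control part 1 would prune it adaptively, keeping only the division it judges most efficient for the picture content.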
Information on how each coding block Bn is divided into the partitions Pin belonging to it is included in the coding mode m(Bn). While the prediction process is carried out on each of all the partitions Pin according to the coding mode m(Bn), an individual prediction parameter can be selected for each partition Pin. The encoding control part 1 produces such a block division state as shown in, for example, Fig. 10 for a coding block having the maximum size, and then determines the coding blocks Bn. The hatched portions in Fig. 10(a) show a distribution of the partitions into which the coding block having the maximum size is divided, and Fig. 10(b) shows, using a tree graph, a situation in which coding modes m(Bn) are respectively assigned to the partitions generated through the hierarchical layer division. Each node enclosed by a square in Fig. 10(b) is a node (coding block Bn) to which a coding mode m(Bn) is assigned. When the encoding control part 1 selects an optimal coding mode m(Bn) for each partition Pin of each coding block Bn, and the coding mode m(Bn) is an intra coding mode (step ST3), the selection switch 3 outputs the partition Pin of the coding block Bn, which is generated through the division by the block dividing part 2, to the intra prediction part 4. In contrast, when the coding mode m(Bn) is an inter coding mode (step ST3), the selection switch outputs the partition Pin of the coding block Bn, which is generated through the division by the block dividing part 2, to the motion-compensated prediction part 5. When receiving the partition Pin of the coding block Bn from the selection switch 3, the intra prediction part 4 carries out an intra prediction process on the partition Pin of the coding block Bn using the intra prediction parameters corresponding to the coding mode m(Bn) selected by the encoding control part 1, to generate an intra prediction image Pin (step ST4).
After generating the intra prediction image Pin, the intra prediction part 4 outputs the intra prediction image Pin to the subtraction part 6 and the addition part 9, while outputting the intra prediction parameters to the variable length encoding part 13 to enable the moving image decoding device shown in Fig. 5 to generate the same intra prediction image Pin. Although the intra prediction process shown in this Embodiment 1 is not limited to one according to the algorithm defined in the AVC/H.264 standards (ISO/IEC 14496-10), the intra prediction parameters need to include information required for the moving image encoding device and the moving image decoding device to generate completely the same intra prediction image. When receiving the partition Pin of the coding block Bn from the selection switch 3, and when the coding mode m(Bn) selected by the encoding control part 1 is an inter coding mode that is a direct mode, the motion-compensated prediction part 5 generates a spatial direct vector in the spatial direct mode from the motion vector of an already-encoded block located in the vicinity of the partition Pin of the coding block Bn, and also generates a temporal direct vector in the temporal direct mode from the motion vector of an already-encoded picture that can be referred to by the coding block Bn. The motion-compensated prediction part 5 then selects, from the spatial direct vector and the temporal direct vector, the direct vector that provides a higher correlation between reference images, and carries out a motion-compensated prediction process on the partition Pin of the coding block Bn using the selected direct vector and the inter prediction parameters corresponding to the coding mode m(Bn), to generate a prediction image (step ST5).
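The spatial direct vector derivation just described can be sketched as follows. The component-wise median of the motion vectors of three already-encoded neighboring blocks (left, above, above-right) is the scheme used by the AVC/H.264 spatial direct mode and is assumed here for illustration; the helper names are hypothetical.

```python
# Hedged sketch: derive a spatial direct vector as the component-wise median
# of the motion vectors of three already-encoded neighboring blocks
# (an AVC/H.264-style assumption, not mandated by the text).

def median_mv(mv_a, mv_b, mv_c):
    """Component-wise median of three (x, y) motion vectors."""
    def med(a, b, c):
        return sorted((a, b, c))[1]
    return (med(mv_a[0], mv_b[0], mv_c[0]),
            med(mv_a[1], mv_b[1], mv_c[1]))

def spatial_direct_vector(neighbor_mvs):
    """neighbor_mvs: motion vectors of the left, above, and above-right
    already-encoded blocks; unavailable neighbors are passed as (0, 0)."""
    mv_a, mv_b, mv_c = neighbor_mvs
    return median_mv(mv_a, mv_b, mv_c)

print(spatial_direct_vector([(4, -2), (6, 0), (2, -1)]))  # (4, -1)
```

The median makes the derived vector robust to a single outlier neighbor, which is why it is a common choice for spatial motion vector prediction.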
In contrast, when the coding mode m(Bn) selected by the encoding control part 1 is an inter coding mode other than a direct mode, the motion-compensated prediction part 5 searches for a motion vector using the partition Pin of the coding block Bn and the reference image stored in the motion-compensated prediction frame memory 12, and carries out a motion-compensated prediction process on the partition Pin of the coding block Bn using the motion vector and the inter prediction parameters corresponding to the coding mode m(Bn), to generate a prediction image (step ST5). After generating the inter prediction image Pin, the motion-compensated prediction part 5 outputs the inter prediction image Pin to the subtraction part 6 and the addition part 9, while outputting the inter prediction parameters to the variable length encoding part 13 to enable the moving image decoding device shown in Fig. 5 to generate the same inter prediction image Pin. The inter prediction parameters used for the generation of the inter prediction image include:
- information in which the division of the coding block Bn into partitions is described;
- the motion vector of each partition;
- reference image indication index information showing which reference image is used for the prediction when the motion-compensated prediction frame memory 12 stores a large number of reference images;
- index information showing which motion vector predicted value is selected and used when there are a large number of motion vector predicted value candidates;
- index information showing which filter is selected and used when there are a large number of motion-compensation interpolation filters; and
- selection information showing which pixel accuracy is used when the motion vector of the partition currently being processed can show a large number of degrees of pixel accuracy (half pixel, 1/4 pixel, 1/8 pixel, etc.).
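The parameter set listed above can be pictured as one record per inter-coded block. The following container is purely hypothetical: the patent prescribes the information content, not any field names or syntax.

```python
# Hypothetical container for the inter prediction parameters listed above.
# All field names are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class InterPredictionParams:
    partition_mode: int                    # how Bn is divided into partitions
    motion_vectors: List[Tuple[int, int]]  # one motion vector per partition
    ref_image_index: int                   # which stored reference image
    mv_predictor_index: int                # which predicted-MV candidate
    interp_filter_index: int               # which interpolation filter
    pixel_accuracy: str                    # "half", "quarter", "eighth", ...

params = InterPredictionParams(
    partition_mode=0,
    motion_vectors=[(3, -1)],
    ref_image_index=0,
    mv_predictor_index=1,
    interp_filter_index=0,
    pixel_accuracy="quarter",
)
print(params.motion_vectors[0])  # (3, -1)
```

Every field of such a record must reach the decoder (multiplexed into the bitstream by the variable length encoding part 13) so that both devices can regenerate exactly the same inter prediction image.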
The inter prediction parameters are multiplexed into the bitstream by the variable length encoding part 13 in order to enable the moving image decoding device to generate completely the same inter prediction image. The outline of the process performed by the motion-compensated prediction part 5 is as mentioned above, and the details of the process will be mentioned below. After the intra prediction part 4 or the motion-compensated prediction part 5 generates a prediction image (an intra prediction image Pin or an inter prediction image Pin), the subtraction part 6 subtracts the prediction image (the intra prediction image Pin or the inter prediction image Pin) generated by the intra prediction part 4 or the motion-compensated prediction part 5 from the partition Pin of the coding block Bn, which is generated through the division by the block dividing part 2, to generate a difference image, and outputs a prediction difference signal ein showing the difference image to the transform/quantization part 7 (step ST6). When receiving the prediction difference signal ein showing the difference image from the subtraction part 6, the transform/quantization part 7 carries out a transform process (e.g., a DCT (discrete cosine transform) or an orthogonal transform process, such as a KL transform, in which bases are designed for a specific learning sequence in advance) on the difference image in units of a block having the transform block size included in the prediction difference coding parameters output thereto from the encoding control part 1, quantizes the transform coefficients of the difference image using the quantization parameter included in the prediction difference coding parameters, and outputs the quantized transform coefficients to the inverse quantization/inverse transform part 8 and to the variable length encoding part 13 as compressed data of the difference image (step ST7).
When receiving the compressed data of the difference image from the transform/quantization part 7, the inverse quantization/inverse transform part 8 inverse-quantizes the compressed data of the difference image using the quantization parameter included in the prediction difference coding parameters output thereto from the encoding control part 1, performs an inverse transform process (e.g., an inverse DCT (inverse discrete cosine transform) or an inverse transform process such as an inverse KL transform) on the inverse-quantized compressed data in units of a block having the transform block size included in the prediction difference coding parameters, and outputs the result of the inverse transform process as a local decoded prediction difference signal ein hat ("^" appended to an alphabetic letter is expressed as "hat" for reasons of restrictions in electronic applications) (step ST8). When receiving the local decoded prediction difference signal ein hat from the inverse quantization/inverse transform part 8, the addition part 9 adds the local decoded prediction difference signal ein hat and the prediction signal showing the prediction image (the intra prediction image Pin or the inter prediction image Pin) generated by the intra prediction part 4 or the motion-compensated prediction part 5, to generate a local decoded image that is a local decoded partition image Pin hat or a local decoded coding block image, which is a group of local decoded partition images (step ST9). After generating the local decoded image, the addition part 9 stores a local decoded image signal showing the local decoded image in the intra prediction memory 10 and also outputs the local decoded image signal to the loop filter part 11.
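A minimal sketch of the quantize/inverse-quantize round trip of steps ST7 and ST8: simple scalar quantization is assumed here purely for illustration (the real scheme is governed by the quantization parameter and is not specified in the text), but it shows why the locally decoded signal differs from the original prediction difference.

```python
# Illustrative scalar quantization of transform coefficients (step ST7) and
# the matching inverse quantization (step ST8). The divide/round and
# multiply scheme is an assumption standing in for the real quantizer.

def quantize(coeffs, qstep):
    """Map transform coefficients to quantized levels (compressed data)."""
    return [round(c / qstep) for c in coeffs]

def inverse_quantize(levels, qstep):
    """Reconstruct approximate coefficients from quantized levels."""
    return [lvl * qstep for lvl in levels]

coeffs = [100.0, -37.0, 12.0, -3.0]
levels = quantize(coeffs, qstep=8)
recon = inverse_quantize(levels, qstep=8)
print(levels)  # [12, -5, 2, 0]
print(recon)   # [96, -40, 16, 0] -- lossy, hence the "local decoded" signal
```

Because the encoder feeds this lossy reconstruction (not the original difference) into the addition part 9, its reference images stay bit-identical to those the decoder will build, which is the point of the local decoding loop.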
The moving image encoding device repeatedly carries out the processes of steps ST3 to ST9 until it completes the processing on all the coding blocks Bn into which the input image is hierarchically divided, and, when completing the processing on all the coding blocks Bn, shifts to the process of step ST12 (steps ST10 and ST11). The variable length encoding part 13 entropy-encodes the compressed data output thereto from the transform/quantization part 7, the coding mode (including the information showing the state of the division into the coding blocks) and the prediction difference coding parameters, which are output thereto from the encoding control part 1, and the intra prediction parameters output thereto from the intra prediction part 4 or the inter prediction parameters output thereto from the motion-compensated prediction part 5. The variable length encoding part 13 multiplexes the encoded data which are the entropy-encoded results of the compressed data, the coding mode, the prediction difference coding parameters, and the intra prediction parameters or the inter prediction parameters, to generate a bitstream (step ST12). When receiving the local decoded image signal from the addition part 9, the loop filter part 11 compensates for a coding distortion included in the local decoded image signal, and stores the local decoded image shown by the coding-distortion-compensated local decoded image signal in the motion-compensated prediction frame memory 12 as a reference image (step ST13). The loop filter part 11 can carry out the filtering process on the local decoded image signal output thereto from the addition part 9 for each coding block having the maximum size or for each individual coding block. As an alternative, after the local decoded image signal corresponding to all the macroblocks of one screen is output, the loop filter part can carry out the filtering process on all the macroblocks of the one screen at a time.
In the following, the processing performed by the compensated motion prognosis part 5 will be explained in detail. When the coding mode m(Bn) selected by the coding control part 1 is an inter mode other than the direct modes, the selection switch 21 of the compensated motion prognosis part 5 outputs each of the partitions Pin, into which the coding block Bn is divided by the block division part 2, to the motion vector search part 22. In contrast, when the coding mode m(Bn) is an inter mode which is a direct mode, the selection switch outputs each of the partitions Pin, into which the coding block Bn is divided by the block division part 2, to the direct vector generation part 23. In this case, because the direct vector generation part 23 does not use the partitions Pin of the coding block Bn for the generation of a direct vector, the selection switch does not have to output the partitions Pin of the coding block Bn to the direct vector generation part 23 even though the coding mode m(Bn) is an inter mode which is a direct mode. When receiving each of the partitions Pin of the coding block Bn from the selection switch 21, the motion vector search part 22 of the compensated motion prognosis part 5 searches for an optimal motion vector in the inter mode while referring to the partition Pin and the reference image stored in the compensated motion prognosis frame memory 12, and outputs the motion vector to the motion compensation processing part 24. Because the process of searching for an optimal motion vector in the inter mode is a known technique, a detailed explanation of the process will be omitted hereinafter.
When the coding mode m(Bn) is a direct mode, the direct vector generation part 23 of the compensated motion prognosis part 5 generates both a spatial direct vector in the spatial direct mode and a temporal direct vector in the temporal direct mode for each of the partitions Pin of the coding block Bn, and outputs a direct vector selected from the spatial direct vector and the temporal direct vector to the motion compensation processing part 24 as a motion vector. Because the information showing the state of the division into the partitions Pin belonging to the coding block Bn is included in the coding mode m(Bn), as mentioned above, the direct vector generation part 23 can specify each of the partitions Pin of the coding block Bn by referring to the coding mode m(Bn). More specifically, the spatial direct vector generation part 31 of the direct vector generation part 23 reads the motion vector of an already-encoded block located in the vicinity of each of the partitions Pin of the coding block Bn from among the motion vectors of already-encoded blocks stored in a motion vector memory (not shown) or an internal memory (not shown), and generates a spatial direct vector in the spatial direct mode from the motion vector. Furthermore, the temporal direct vector generation part 32 of the direct vector generation part 23 reads, from among the motion vectors of already-encoded blocks, the motion vector of a block spatially located in the same position as each of the partitions Pin of the coding block Bn in an already-encoded figure which can be referred to by the coding block Bn, and generates a temporal direct vector in the temporal direct mode from the motion vector. Fig. 11 is a schematic diagram showing the method of generating a motion vector (temporal direct vector) in the temporal direct mode. As an example, a case in which a block MB1 in a figure B2 is the partition Pin which is the target to be encoded, and the block MB1 is encoded in the temporal direct mode, is taken.
In this example, the temporal direct vector generation part uses the motion vector MV of a block MB2 which is spatially located in the same position as the block MB1 in the figure P3, the figure closest to the figure B2 among the already-encoded figures located in the backward direction with respect to the figure B2 on the time axis. This motion vector MV refers to the figure P0, and the motion vectors MVL0 and MVL1 which are used when encoding the block MB1 are calculated according to the following equation (3):

MVL0 = (tb / td) x MV
MVL1 = ((tb - td) / td) x MV ... (3)

where td denotes the temporal distance from the figure P3 to the figure P0, and tb denotes the temporal distance from the figure B2 to the figure P0. After calculating the motion vectors MVL0 and MVL1, the temporal direct vector generation part 32 outputs the motion vectors MVL0 and MVL1 to the direct vector determination part 33 as the temporal direct vectors in the temporal direct mode. Although, as the method of generating a temporal direct vector which the temporal direct vector generation part 32 uses, an H.264 method as shown in Fig. 11 can be used, this embodiment is not limited to this method and another method can be used as an alternative. Fig. 12 is a schematic diagram showing the method of generating a motion vector (spatial direct vector) in the spatial direct mode. In Fig. 12, currentMB denotes the partition Pin which is the block to be encoded. At this time, when the motion vector of an already-encoded block A on the left side of the block to be encoded is expressed as MVa, the motion vector of an already-encoded block B on the upper side of the block to be encoded is expressed as MVb, and the motion vector of an already-encoded block C on the upper right side of the block to be encoded is expressed as MVc, the spatial direct vector generation part can calculate the motion vector MV of the block to be encoded by determining the median of these motion vectors MVa, MVb, and MVc, as shown in the following equation (4):

MV = median(MVa, MVb, MVc) ... (4)

In the spatial direct mode, the spatial direct vector generation part determines a motion vector for each of list 0 and list 1.
In this case, the spatial direct vector generation part can determine the motion vectors for both of the lists using the method mentioned above. After calculating the motion vectors MV for both list 0 and list 1 in the way mentioned above, the spatial direct vector generation part 31 outputs the motion vector MV of list 0 and that of list 1 to the direct vector determination part 33 as the spatial direct vectors in the spatial direct mode. Although, as the method of generating a spatial direct vector which the spatial direct vector generation part 31 uses, an H.264 method as shown in Fig. 12 can be used, this embodiment is not limited to this method and another method can be used as an alternative. For example, as shown in Fig. 13, the spatial direct vector generation part can select three motion vectors from a group of blocks A1 to An, a group of blocks B1 to Bn, and a group of blocks C, D, and E as the candidates for a median prognosis, respectively, to generate a spatial direct vector. Furthermore, in a case in which the ref_Idx of the MV candidates which are used for the generation of a spatial direct vector differ from each other, the spatial direct vector generation part can scale each candidate according to the distance in the temporal direction, as shown in Fig. 14:

scaled_MV = MV x (d(Xr) / d(Yr)) ... (5)

where scaled_MV denotes a scaled vector, MV denotes a motion vector yet to be scaled, and d(x) denotes the temporal distance to x. Furthermore, Xr denotes the reference image shown by the block to be encoded, and Yr denotes the reference image shown by each of the block positions A to D which is the target of the scaling. After the spatial direct vector generation part 31 generates the spatial direct vectors, the direct vector determination part 33 of the direct vector generation part 23 calculates an evaluated value in the spatial direct mode using the spatial direct vectors.
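The three generation rules above can be sketched in a few lines; this is an illustrative reading of equations (3), (4), and (5), not the normative derivation, and the picture-order-count arguments (`poc_cur`, `poc_ref0`, `poc_col`) are assumed names standing in for the temporal positions of the figures B2, P0, and P3.

```python
import statistics

def temporal_direct_vectors(mv_col, poc_cur, poc_ref0, poc_col):
    """Equation (3): scale the co-located motion vector MV of the anchor
    figure (P3 in Fig. 11) by the temporal-distance ratio to obtain
    MVL0/MVL1 for the current block (MB1 in the figure B2)."""
    td = poc_col - poc_ref0          # temporal distance P3 -> P0
    tb = poc_cur - poc_ref0          # temporal distance B2 -> P0
    mvl0 = tuple(v * tb / td for v in mv_col)           # forward vector
    mvl1 = tuple(v * (tb - td) / td for v in mv_col)    # backward vector
    return mvl0, mvl1

def spatial_direct_vector(mva, mvb, mvc):
    """Equation (4): component-wise median of the motion vectors of the
    neighbouring blocks A, B, and C."""
    return tuple(statistics.median(c) for c in zip(mva, mvb, mvc))

def scale_mv(mv, d_xr, d_yr):
    """Equation (5): scale a candidate whose ref_Idx differs, by the ratio
    of temporal distances d(Xr) / d(Yr)."""
    return tuple(v * d_xr / d_yr for v in mv)
```

For example, with the figure B2 between P0 and P3 on the time axis, `temporal_direct_vectors((6, -3), poc_cur=2, poc_ref0=0, poc_col=3)` splits the co-located vector into a forward and a backward part in the ratio tb : (tb - td).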
After the temporal direct vector generation part 32 generates the temporal direct vectors, the direct vector determination part 33 calculates an evaluated value in the temporal direct mode using the temporal direct vectors. The direct vector determination part 33 compares the evaluated value in the spatial direct mode with the evaluated value in the temporal direct mode, selects a direct vector in one of the direct modes from the spatial direct vector and the temporal direct vector using a determination method which will be mentioned below, and outputs the direct vector to the motion compensation processing part 24. Hereinafter, the processing performed by the direct vector determination part 33 will be explained concretely. After the spatial direct vector generation part 31 generates the spatial direct vectors MVL0 and MVL1, the motion compensation part 41 of the direct vector determination part 33 generates a list 0 prognosis image in the spatial direct mode using the spatial direct vector MVL0, and also generates a list 1 prognosis image in the spatial direct mode using the spatial direct vector MVL1. Fig. 15 is an explanatory drawing showing an example of the calculation of an evaluated value using the degree of similarity between a forward prognosis image and a backward prognosis image. In the example shown in Fig. 15, the motion compensation part generates a forward prognosis image as the list 0 prognosis image in the spatial direct mode, and also generates a backward prognosis image as the list 1 prognosis image in the spatial direct mode.
After the temporal direct vector generation part 32 generates the temporal direct vectors which are the motion vectors MV of list 0 and list 1, the motion compensation part 41 further generates a list 0 prognosis image in the temporal direct mode using the temporal direct vector which is the forward motion vector MV, and also generates a list 1 prognosis image in the temporal direct mode using the temporal direct vector which is the backward motion vector MV. In the example shown in Fig. 15, the motion compensation part generates a forward prognosis image in the temporal direct mode as the list 0 prognosis image, and also generates a backward prognosis image as the list 1 prognosis image in the temporal direct mode. Although in this example the motion compensation part generates the forward prognosis image as the list 0 prognosis image using the reference image list 0 showing a reference image in the forward direction, and also generates the backward prognosis image as the list 1 prognosis image using the reference image list 1 showing a reference image in the backward direction, the motion compensation part can alternatively generate the backward prognosis image as the list 0 prognosis image using the reference image list 0 showing a reference image in the backward direction, and also generate the forward prognosis image as the list 1 prognosis image using the reference image list 1 showing a reference image in the forward direction. As a further alternative, the motion compensation part can generate forward prognosis images as the list 0 prognosis image and the list 1 prognosis image using the reference image list 0 showing a reference image in the forward direction and the reference image list 1 showing a reference image in the forward direction, respectively (this process will be mentioned in detail below).
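The generation of a list 0 or list 1 prognosis image by the motion compensation part 41 amounts to fetching a displaced block from the corresponding reference image. A heavily simplified sketch (integer-pel vectors only, border clipping standing in for real edge padding, function and parameter names assumed for illustration):

```python
def predict_block(ref_frame, top, left, size, mv):
    """Fetch a size x size prognosis block from a reference frame,
    displaced by the integer motion vector mv = (dx, dy).
    ref_frame is a 2-D list of luma samples."""
    h, w = len(ref_frame), len(ref_frame[0])
    dx, dy = mv
    block = []
    for y in range(size):
        row = []
        for x in range(size):
            yy = min(max(top + y + dy, 0), h - 1)   # clip vertically
            xx = min(max(left + x + dx, 0), w - 1)  # clip horizontally
            row.append(ref_frame[yy][xx])
        block.append(row)
    return block
```

Calling this once with MVL0 against the list 0 reference image and once with MVL1 against the list 1 reference image yields the two prognosis images whose similarity is evaluated next.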
After the motion compensation part generates the list 0 prognosis image and the list 1 prognosis image in the spatial direct mode, the similarity calculation part 42 of the direct vector determination part 33 calculates an evaluated value SADspatial in the spatial direct mode, as shown in the following equation (6). For the sake of simplicity, the list 0 prognosis image in the spatial direct mode is the forward prognosis image fspatial, and the list 1 prognosis image in the spatial direct mode is the backward prognosis image gspatial in equation (6):

SADspatial = | fspatial - gspatial | ... (6)

Furthermore, after the motion compensation part generates the list 0 prognosis image and the list 1 prognosis image in the temporal direct mode, the similarity calculation part 42 calculates an evaluated value SADtemporal in the temporal direct mode, as shown in the following equation (7). For the sake of simplicity, the list 0 prognosis image in the temporal direct mode is the forward prognosis image ftemporal, and the list 1 prognosis image in the temporal direct mode is the backward prognosis image gtemporal in equation (7):

SADtemporal = | ftemporal - gtemporal | ... (7)

The larger the difference between the forward prognosis image and the backward prognosis image is, the smaller the degree of similarity between the two images is (the evaluated value SAD showing the sum of absolute differences between the two images becomes larger), and the weaker the temporal correlation between them is. Conversely, the smaller the difference between the forward prognosis image and the backward prognosis image is, the larger the degree of similarity between the two images is (the evaluated value SAD showing the sum of absolute differences between the two images becomes smaller), and the stronger the temporal correlation between them is. In addition, an image which is predicted from a direct vector should be an image which is similar to the block to be encoded.
Therefore, when prognosis images are generated using the two vectors, respectively, the images which are predicted from the respective vectors are expected to resemble the block to be encoded, and this means that there is a high correlation between the two reference images. Therefore, by selecting the direct vector having the smaller evaluated value SAD from the spatial direct vector and the temporal direct vector, the direct vector determination part can select the mode which provides a high correlation between the reference images, and can thereby improve the accuracy of the direct mode. After the similarity calculation part 42 calculates both the evaluated value SADspatial in the spatial direct mode and the evaluated value SADtemporal in the temporal direct mode, the direct vector selection part 43 of the direct vector determination part 33 compares the degree of similarity between the forward prognosis image fspatial and the backward prognosis image gspatial in the spatial direct mode with the degree of similarity between the forward prognosis image ftemporal and the backward prognosis image gtemporal in the temporal direct mode by comparing the evaluated value SADspatial with the evaluated value SADtemporal. When the degree of similarity between the forward prognosis image fspatial and the backward prognosis image gspatial in the spatial direct mode is equal to or higher than the degree of similarity between the forward prognosis image ftemporal and the backward prognosis image gtemporal in the temporal direct mode (SADspatial <= SADtemporal), the direct vector selection part 43 selects the spatial direct vector generated by the spatial direct vector generation part 31, and outputs the spatial direct vector to the motion compensation processing part 24 as a motion vector.
Conversely, when the degree of similarity between the forward prognosis image ftemporal and the backward prognosis image gtemporal in the temporal direct mode is higher than the degree of similarity between the forward prognosis image fspatial and the backward prognosis image gspatial in the spatial direct mode (SADspatial > SADtemporal), the direct vector selection part selects the temporal direct vector generated by the temporal direct vector generation part 32, and outputs the temporal direct vector to the motion compensation processing part 24 as a motion vector. When the coding mode m(Bn) is not a direct mode and the motion compensation processing part 24 receives the motion vector from the motion vector search part 22, the motion compensation processing part 24 performs a compensated motion prognosis process on the basis of the inter-prognosis parameters output thereto from the coding control part 1 using both the motion vector and one frame of reference image stored in the compensated motion prognosis frame memory 12 to generate a prognosis image. Conversely, when the coding mode m(Bn) is a direct mode and the motion compensation processing part 24 receives the motion vector (i.e., the direct vector selected by the direct vector selection part 43) from the direct vector generation part 23, the motion compensation processing part 24 performs a compensated motion prognosis process on the basis of the inter-prognosis parameters output thereto from the coding control part 1 using both the motion vector and one frame of reference image stored in the compensated motion prognosis frame memory 12 to generate a prognosis image. Because the compensated motion prognosis process performed by the motion compensation processing part 24 is a known technique, a detailed explanation of the process will be omitted hereinafter.
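The SAD-based selection described above can be sketched as follows; a minimal illustration of equations (6) and (7) and of the tie-breaking rule SADspatial <= SADtemporal, with the prognosis images reduced to flat sample lists for brevity.

```python
def sad(img_a, img_b):
    """Evaluated value of equations (6)/(7): sum of absolute differences
    between the list 0 (forward) and list 1 (backward) prognosis images,
    given here as flat lists of samples."""
    return sum(abs(a - b) for a, b in zip(img_a, img_b))

def select_direct_vector(f_spatial, g_spatial, mv_spatial,
                         f_temporal, g_temporal, mv_temporal):
    """Choose the direct vector whose pair of prognosis images is the more
    similar (smaller SAD); a tie goes to the spatial direct vector,
    matching the condition SADspatial <= SADtemporal."""
    if sad(f_spatial, g_spatial) <= sad(f_temporal, g_temporal):
        return mv_spatial
    return mv_temporal
```

The SSE variant mentioned later in the text would only replace `abs(a - b)` by `(a - b) ** 2`.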
Although the example in which the similarity calculation part 42 calculates the evaluated value SAD, which is the sum of absolute differences between the two images, in each of the temporal direct mode and the spatial direct mode, and the direct vector selection part 43 compares the evaluated value SAD in the temporal direct mode with that in the spatial direct mode is shown, the similarity calculation part 42 can alternatively calculate the sum of squared differences SSE between the forward prognosis image and the backward prognosis image in each of the temporal direct mode and the spatial direct mode as the evaluated values, and the direct vector selection part 43 can compare the sum of squared differences SSE in the temporal direct mode with that in the spatial direct mode. Although the use of the SSE increases the amount of information to be processed, the degree of similarity can be calculated more correctly. Next, the processing performed by the image decoding device shown in Fig. 5 will be explained. When receiving the bit stream output from the image encoding device of Fig. 1, the variable length decoding part 51 performs a variable length decoding process on the bit stream to decode the frame size in units of a sequence consisting of one or more frames of figures, or in units of a figure (step ST21 of Fig. 8). The variable length decoding part 51 determines the maximum size of each of the coding blocks which is a unit to be processed at a time when a compensated motion prognosis process (inter-frame prognosis process) or an intra-prognosis process (intra-frame prognosis process) is performed, according to the same procedure as that which the coding control part 1 shown in Fig. 1 uses, and also determines the upper limit on the number of hierarchical layers of the hierarchy in which each of the coding blocks having the maximum size is hierarchically divided into blocks (step ST22).
For example, when the maximum size of each coding block is determined according to the resolution of the image inputted to the image encoding device, the variable length decoding part determines the maximum size of each of the coding blocks on the basis of the frame size which the variable length decoding part has decoded previously. When the information showing both the maximum size of each of the coding blocks and the upper limit on the number of hierarchical layers is multiplexed into the bit stream, the variable length decoding part refers to the information which is acquired by decoding the bit stream. Because the information showing the state of the division of each coding block B0 having the maximum size is included in the coding mode m(B0) of the coding block B0 having the maximum size which is multiplexed into the bit stream, the variable length decoding part 51 specifies each of the coding blocks Bn into which the image is hierarchically divided by decoding the bit stream to acquire the coding mode m(B0) of the coding block B0 having the maximum size which is multiplexed into the bit stream (step ST23). After specifying each of the coding blocks Bn, the variable length decoding part 51 decodes the bit stream to acquire the coding mode m(Bn) of each coding block Bn, and specifies each partition Pin belonging to the coding block Bn on the basis of the information about the partitions Pin included in the coding mode m(Bn). After specifying each partition Pin belonging to the coding block Bn, the variable length decoding part 51 decodes the coded data to acquire the compressed data, the coding mode, the prognosis difference coding parameters, and the intra-prognosis parameters or inter-prognosis parameters for each partition Pin (step ST24).
When the coding mode m(Bn) of a partition Pin belonging to the coding block Bn, which is specified by the variable length decoding part 51, is an intra coding mode (step ST25), the selection switch 52 outputs the intra-prognosis parameters output thereto from the variable length decoding part 51 to the intra-prognosis part 53. In contrast, when the coding mode m(Bn) of the partition Pin is an inter coding mode (step ST25), the selection switch outputs the inter-prognosis parameters output thereto from the variable length decoding part 51 to the compensated motion prognosis part 54. When receiving the intra-prognosis parameters from the selection switch 52, the intra-prognosis part 53 performs an intra-prognosis process on the partition Pin of the coding block Bn using the intra-prognosis parameters to generate an intra-prognosis image Pin (step ST26). When receiving the inter-prognosis parameters from the selection switch 52, in a case in which the coding mode m(Bn) output thereto from the variable length decoding part 51 is an inter coding mode which is a direct mode, the compensated motion prognosis part 54 generates a spatial direct vector in the spatial direct mode and a temporal direct vector in the temporal direct mode, like the compensated motion prognosis part 5 shown in Fig. 1. After generating a spatial direct vector in the spatial direct mode and a temporal direct vector in the temporal direct mode, the compensated motion prognosis part 54 selects the direct vector which provides the higher correlation between the reference images from the spatial direct vector and the temporal direct vector, like the compensated motion prognosis part 5 shown in Fig. 1, and performs a compensated motion prognosis process on the partition Pin of the coding block Bn using the direct vector thus selected and the inter-prognosis parameters to generate an inter-prognosis image Pin (step ST27).
In contrast, when the coding mode m(Bn) output thereto from the variable length decoding part 51 is an inter coding mode other than the direct modes, the motion compensation processing part 63 of the compensated motion prognosis part 54 performs a compensated motion prognosis process on the partition Pin of the coding block Bn using the motion vector included in the inter-prognosis parameters output thereto from the selection switch 52 to generate an inter-prognosis image Pin (step ST27). The inverse quantization/inverse transformation part 55 inverse-quantizes the compressed data associated with the coding block, which are output thereto from the variable length decoding part 51, using the quantization parameter included in the prognosis difference coding parameters output thereto from the variable length decoding part 51, performs an inverse transformation process (e.g., an inverse DCT (inverse discrete cosine transform) or an inverse transformation process such as an inverse KL transform) on the compressed data to which the inverse quantization has been applied, in units of a block having the transformation block size included in the prognosis difference coding parameters, and outputs the compressed data on which the inverse quantization/inverse transformation part has performed the inverse transformation process to the addition part 56 as a decoded prognosis difference signal (a signal showing a difference image before compression) (step ST28).
When receiving the decoded prognosis difference signal from the inverse quantization/inverse transformation part 55, the addition part 56 generates a decoded image by adding the decoded prognosis difference signal and the prognosis signal showing the prognosis image generated by the intra-prognosis part 53 or the compensated motion prognosis part 54, stores the decoded image signal showing the decoded image in the memory 57 for intra-prognosis, and also outputs the decoded image signal to the loop filter part 58 (step ST29). The moving image decoding device repeatedly performs the processes of steps ST23 to ST29 until the moving image decoding device completes the processing on all of the coding blocks Bn into which the image is hierarchically divided (step ST30). When receiving the decoded image signal from the addition part 56, the loop filter part 58 compensates for a coding distortion included in the decoded image signal, and stores the decoded image shown by the decoded image signal on which the loop filter part has compensated for the coding distortion in the compensated motion prognosis frame memory 59 as a reference image (step ST31). The loop filter part 58 can perform the filtering process for each coding block having the maximum size of the decoded image signal output thereto from the addition part 56, or for each coding block. As an alternative, after the decoded image signal corresponding to all the macroblocks of one screen is output, the loop filter part can perform the filtering process on all the macroblocks of the one screen at a time.
As can be seen from the above description, the moving image encoding device according to this Embodiment 1 is constructed in such a way that the moving image encoding device includes: the coding control part 1 for determining the maximum size of each of the coding blocks which is a unit to be processed at a time when a prognosis process is performed, also determining the upper limit on the number of hierarchical layers of the hierarchy in which each of the coding blocks having the maximum size is hierarchically divided into blocks, and selecting a coding mode suitable for each of the coding blocks, into which each coding block having the maximum size is divided hierarchically, from one or more available coding modes; and the block division part 2 for dividing an inputted image into coding blocks each having the maximum size determined by the coding control part 1, and also dividing each of the coding blocks hierarchically until the number of hierarchical layers reaches the upper limit determined by the coding control part 1; and in such a way that, when an inter coding mode which is a direct mode is selected by the coding control part 1 as the coding mode suitable for one of the coding blocks into which the inputted image is divided by the block division part 2, the compensated motion prognosis part 5 generates a spatial direct vector in a spatial direct mode from the motion vector of an already-encoded block located in the vicinity of the coding block and also generates a temporal direct vector in a temporal direct mode from the motion vector of an already-encoded figure which can be referred to by the coding block, selects the direct vector which provides the higher correlation between the reference images from the spatial direct vector and the temporal direct vector, and performs a compensated motion prognosis process on the coding block using the direct vector to generate a prognosis image.
Therefore, an advantage of being able to select an optimal direct mode for each predetermined block unit, and hence of reducing the code amount, is provided. In addition, the moving image decoding device according to this Embodiment 1 is constructed in such a way that the moving image decoding device includes the variable length decoding part 51 for variable-length-decoding the coded data to acquire the compressed data and the coding mode associated with each of the coding blocks, into which an image is hierarchically divided, from the coded data multiplexed into a bit stream; and in such a way that, when the coding mode associated with a coding block variable-length-decoded by the variable length decoding part 51 is an inter coding mode which is a direct mode, the compensated motion prognosis part 54 generates a spatial direct vector in the spatial direct mode from the motion vector of an already-decoded block located in the vicinity of the coding block and also generates a temporal direct vector in the temporal direct mode from the motion vector of an already-decoded figure which can be referred to by the coding block, selects the direct vector which provides the higher correlation between the reference images from the spatial direct vector and the temporal direct vector, and performs a compensated motion prognosis process on the coding block using the direct vector to generate a prognosis image. Therefore, an advantage of enabling the moving image decoding device to decode the coded data which allow the selection of an optimal direct mode for each predetermined block unit is provided.
Embodiment 2.
In Embodiment 1 mentioned above, the example in which each of the compensated motion prognosis parts 5 and 54 (more specifically, the similarity calculation part 42) calculates the degree of similarity between the forward prognosis image fspatial and the backward prognosis image gspatial in the spatial direct mode as the evaluated value SADspatial in the spatial direct mode, while calculating the degree of similarity between the forward prognosis image ftemporal and the backward prognosis image gtemporal in the temporal direct mode as the evaluated value SADtemporal in the temporal direct mode, is shown. As an alternative, each of the compensated motion prognosis parts can calculate a variance σ(spatial) of the motion vectors of already-encoded blocks (already-decoded blocks) located in the vicinity of a coding block Bn as the evaluated value in the spatial direct mode, while calculating a variance σ(temporal) of the motion vectors of already-encoded blocks (already-decoded blocks) located in the vicinity of a block spatially located in the same position as the coding block Bn in an already-encoded figure (already-decoded figure) which can be referred to by the coding block Bn as the evaluated value in the temporal direct mode. This embodiment can provide the same advantages as those provided by Embodiment 1 mentioned above. More specifically, the similarity calculation part 42 calculates the variance σ(spatial) of the motion vectors of the already-encoded blocks (already-decoded blocks) located in the vicinity of the coding block Bn as the evaluated value in the spatial direct mode (refer to the following equation (8)), as shown in Fig. 16(a), instead of calculating the degree of similarity between the forward prognosis image fspatial and the backward prognosis image gspatial in the spatial direct mode.
In addition, the similarity calculation part 42 calculates the variance σ(temporal) of the motion vectors of the already-encoded blocks (already-decoded blocks) located in the vicinity of a block spatially located in the same position as the coding block Bn in an already-encoded figure (already-decoded figure) which can be referred to by the coding block Bn as the evaluated value in the temporal direct mode (refer to the following equation (8)), as shown in Fig. 16(b), instead of calculating the degree of similarity between the forward prognosis image ftemporal and the backward prognosis image gtemporal in the temporal direct mode:

σ(m) = Σi | MVm,i - MVm,ave |^2 ... (8)

where MVm,i is the motion vector of an adjacent block, MVm,ave is the average of the motion vectors of the adjacent blocks, and m is a symbol denoting spatial or temporal. The direct vector selection part 43 compares the variance σ(temporal) of the motion vectors with the variance σ(spatial) of the motion vectors, and, when the variance σ(temporal) of the motion vectors is equal to or larger than the variance σ(spatial) of the motion vectors, determines that the reliability of the motion vector in the temporal direct mode (temporal direct vector) is low, and selects the motion vector in the spatial direct mode (spatial direct vector). Conversely, when the variance σ(spatial) of the motion vectors is larger than the variance σ(temporal) of the motion vectors, the direct vector selection part determines that the reliability of the motion vector in the spatial direct mode (spatial direct vector) is low, and selects the motion vector in the temporal direct mode (temporal direct vector).
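A minimal sketch of the variance-based evaluated value of equation (8) and of the selection rule, under the assumption that the mode whose neighbouring motion vectors show the smaller variance is judged the more reliable; the function and parameter names are illustrative.

```python
def mv_variance(mvs):
    """Equation (8): sum of squared deviations of the adjacent blocks'
    motion vectors MVm,i from their average MVm,ave, used as the
    evaluated value sigma(m)."""
    n = len(mvs)
    mean = [sum(v[i] for v in mvs) / n for i in (0, 1)]
    return sum((v[0] - mean[0]) ** 2 + (v[1] - mean[1]) ** 2 for v in mvs)

def select_by_variance(spatial_neigh_mvs, temporal_neigh_mvs,
                       mv_spatial, mv_temporal):
    """Pick the direct vector of the mode whose neighbourhood shows the
    smaller motion vector variance; the higher-variance mode is judged
    less reliable."""
    if mv_variance(temporal_neigh_mvs) >= mv_variance(spatial_neigh_mvs):
        return mv_spatial
    return mv_temporal
```

Consistent neighbouring motion (variance near zero) on the spatial side thus steers the choice toward the spatial direct vector, and vice versa.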
Although the example in which each of the compensated motion prognosis parts generates both the temporal direct vector and the spatial direct vector and selects one of the direct vectors is shown in Modality 1 mentioned above, each of the compensated motion prognosis parts can add another vector as a candidate vector, in addition to the temporal direct vector and the spatial direct vector, and select a direct vector from among these candidate vectors. For example, each of the compensated motion prognosis parts can add spatial vectors MV_A, MV_B, and MV_C and temporal vectors MV_1 to MV_8 as shown in Fig. 17 to the candidate vectors, and select a direct vector from among these spatial and temporal vectors. In addition, as shown in Fig. 18, each of the compensated motion prognosis parts can generate a vector from a large number of already encoded vectors, and add that vector to the candidate vectors. While such an increase in the number of candidate vectors increases the amount of information to be processed, the accuracy of the direct vector can be improved and therefore the coding efficiency can be improved. Although no particular mention is made in Modality 1 mentioned above, the candidates for the direct vector can be determined on a per-slice basis. Information showing which vectors are to be selected as candidates is multiplexed into each slice header. For example, because the effect of a temporal vector is small in a video that is acquired using a camera, a method can be considered of removing temporal vectors from the candidates for selection for such a video and, because the effect of a spatial vector is large in a video that is acquired by a fixed camera, of adding spatial vectors to the candidates for selection for such a video.
Although the greater the number of candidate vectors, the closer to the original image the generated prognostic image can be, a balance between the amount of information to be processed and the coding efficiency can be achieved by determining the candidates in consideration of the characteristics of the video, such as by excluding ineffective vectors from the candidates in advance, in order to prevent the amount of information to be processed from increasing enormously with the number of candidate vectors. Switching a vector between candidate and non-candidate is achieved using, for example, a method of providing an ON/OFF flag for each vector and defining only a vector whose flag is set to ON as a candidate. A motion vector that can be a candidate for selection can be switched between candidate and non-candidate using each slice header or each header in an upper layer, such as each sequence header or each figure header. In addition, one or more sets of motion vectors each of which can be a candidate for selection can be prepared, and an index showing each of the candidate sets can be encoded. In addition, a vector can be switched between candidate and non-candidate for each macro block or each coding block. Switching a vector between candidate and non-candidate for each macro block or each coding block gives locality to each macro block or coding block, and provides an advantage of improving the coding efficiency. In addition, the selection candidates can be uniquely determined for each partition block size. Because the spatial correlation in general becomes weak as the block size becomes small, it can be expected that the prognostic accuracy of a vector determined through an average prognosis becomes worse. Therefore, by removing a motion vector determined through an average prognosis from the candidates, for example, the amount of information to be processed can be reduced without lowering the coding efficiency.
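A minimal sketch of the per-slice ON/OFF switching described above; the candidate names, the SliceHeader structure, and the flag layout are assumptions made for this illustration only:

```python
from dataclasses import dataclass, field

# Hypothetical candidate vector names, for illustration only.
ALL_CANDIDATES = ["median", "MV_A", "MV_B", "MV_C", "MV_1", "MV_2"]

@dataclass
class SliceHeader:
    # One ON/OFF flag per candidate vector, multiplexed into the slice header.
    candidate_flags: dict = field(
        default_factory=lambda: {name: True for name in ALL_CANDIDATES})

def selectable_candidates(header):
    """Only vectors whose flag is set to ON take part in direct vector selection."""
    return [name for name in ALL_CANDIDATES
            if header.candidate_flags.get(name, False)]
```

For a video from a fixed camera, for example, the encoder could set the flags of the temporal candidates to OFF before writing the slice header.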
Although the explanation in Modality 1 mentioned above assumes the case in which both a temporal direct vector and a spatial direct vector exist, there is a case in which no motion vector exists, such as when an intra coding process is performed on the coding block Bn. In this case, a method of setting a zero vector as the motion vector, a method of not including any such motion vector in the candidates, and so on can be considered. While the coding efficiency can be improved because the candidates increase in number when a zero vector is set as the motion vector, the amount of information to be processed increases. When no such motion vector is included in the candidates for the direct vector, the amount of information to be processed can be reduced. Although the example of generating a direct vector is shown in Modality 1 mentioned above, the direct vector can also be used as a predicted vector for encoding a normal motion vector. While the amount of information to be processed increases when the direct vector is used as a predicted vector, the coding efficiency can be improved because the accuracy of the prognosis increases. Although the example of calculating an SAD evaluated value from a combination of an image located behind the coding block Bn in time and an image located in front of the coding block Bn in time is shown in Modality 1 mentioned above (refer to Fig. 15), an SAD evaluated value can alternatively be calculated from a combination of only images located behind the coding block Bn in time, as shown in Fig. 19. As an alternative, an SAD evaluated value can be calculated from a combination of only images located in front of the coding block Bn in time. In this case, the temporal vectors are expressed by the following equations (9) and (10).
v̂0 = (d0/dcol) × vcol  (9)
v̂1 = (d1/dcol) × vcol  (10)

where v̂0 is the list 0 vector and v̂1 is the list 1 vector. In the equations above, d denotes a temporal distance, d0 denotes the temporal distance of the list 0 reference image, and d1 denotes the temporal distance of the list 1 reference image. In addition, vcol and dcol denote the vector of the block spatially located at the same position in the reference image as the coding block, and the temporal distance of the reference image shown by that vector, respectively. Even in a case in which the two reference image lists indicate the same reference image, the same method as that shown in Fig. 19 can be applied when each of the lists has two or more reference images. Although the case in which each of the two reference image lists has two or more reference images is assumed in Modality 1 mentioned above, a case can also be considered in which only one reference image is included in each of the two reference image lists. In this case, when the same reference image is set for the two reference image lists, the determination can be performed using only the spatial vector, without using any temporal vector. When different reference images are set for the two reference image lists, the determination can be handled using the method mentioned above. Although a prognosis process in two directions is assumed to be performed in Modality 1 mentioned above, a prognosis process in only one direction can alternatively be performed. When a prognosis using a vector in one direction is performed, information showing which vector is used is encoded and transmitted. As a result, a problem such as occlusion can be dealt with, and a contribution to an improvement in the prognosis accuracy can be made. Although it is assumed in the direct mode shown in Modality 1 mentioned above that a prognosis using two vectors is performed, the number of vectors can be three or more.
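Equations (9) and (10) above amount to scaling the collocated block's vector by a ratio of temporal distances; a minimal sketch, in which the two-component tuple representation of a vector is an assumption of the illustration:

```python
def temporal_direct_vectors(v_col, d_col, d0, d1):
    """Equations (9) and (10): scale the collocated block's vector vcol
    by the temporal-distance ratios d0/dcol and d1/dcol to obtain the
    list 0 and list 1 temporal direct vectors."""
    vx, vy = v_col
    v0 = (d0 / d_col * vx, d0 / d_col * vy)  # equation (9): list 0 vector
    v1 = (d1 / d_col * vx, d1 / d_col * vy)  # equation (10): list 1 vector
    return v0, v1
```

For instance, a collocated vector (8, 4) with dcol = 4 gives the list 0 vector scaled by d0/4 and the list 1 vector scaled by d1/4.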
When three or more vectors are used, for example, a method can be considered of generating a prognostic image using all the candidate vectors, among a large number of candidate vectors, each of which provides an SAD evaluated value equal to or less than a threshold Th. In this case, reference image lists whose number is equal to the number of vectors are stored. In addition, instead of using all the candidates each of which provides an SAD evaluated value equal to or less than the threshold Th, a maximum number of vectors to be used can be set in advance in each slice header or the like, and a prognostic image can be generated using up to that maximum number of vectors, in increasing order of evaluated value. It is generally known that performance improves as the number of reference images used to generate a prognostic image increases. Therefore, while the amount of information to be processed increases, a contribution to an improvement in the coding efficiency can be made. A vector is determined from an evaluation between reference images in Modality 1 mentioned above. This evaluation can alternatively be performed from a comparison between an already encoded image that is spatially adjacent to the coding block and a reference image. In this case, a method of performing the evaluation using an L-shaped image such as that shown in Fig. 20 can be considered. In addition, when an already encoded image that is spatially adjacent to the coding block is used, there is a possibility that the already encoded image is not available in time for the comparison because of pipeline processing. In this case, a method of using the prognostic image instead of the already encoded image can be considered. Although the example in which the size of the coding block Bn is Ln = Mn as shown in Fig. 9 is shown in Modality 1 mentioned above, the size of the coding block Bn can be such that Ln ≠ Mn. For example, a case can be considered in which the size of the coding block Bn is Ln = kMn as shown in Fig. 21.
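Returning to the multi-vector case described above, the threshold-and-maximum selection can be sketched as follows; the function shape and the evaluate_sad callback are assumptions of this illustration:

```python
def vectors_for_prediction(candidates, evaluate_sad, th, max_vectors=None):
    """Keep every candidate vector whose SAD evaluated value is at most Th;
    when a maximum number of vectors is signalled (e.g. in a slice header),
    keep only that many, in increasing order of evaluated value."""
    scored = sorted(((evaluate_sad(v), v) for v in candidates),
                    key=lambda pair: pair[0])
    kept = [v for sad, v in scored if sad <= th]
    if max_vectors is not None:
        kept = kept[:max_vectors]
    return kept
```

The prognostic image would then be generated from the reference images addressed by the surviving vectors.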
In the case in which Ln = kMn, (Ln+1, Mn+1) becomes equal to (Ln, Mn) at the next division, and the subsequent divisions can be carried out in the same way as shown in Fig. 9 or in such a way that (Ln+1, Mn+1) becomes equal to (Ln/2, Mn/2) (refer to Fig. 22). As an alternative, one of the division process shown in Fig. 21 and that shown in Fig. 22 can be selected, as shown in Fig. 23. In the case in which one of the division process shown in Fig. 21 and that shown in Fig. 22 can be selected, a flag showing which division process is selected is encoded. Because this case can be implemented by a method of connecting blocks each consisting of 16 × 16 pixels to one another in a horizontal direction, as in H.264 disclosed by non-patent reference 1, compatibility with the existing method can be maintained. Although the case in which the size of the coding block Bn is Ln = kMn is shown in the explanation mentioned above, it goes without saying that the divisions can be performed on the same principle even if blocks are connected to one another in a vertical direction, as in the case of kLn = Mn. Although the transformation/quantization part 7 and the inverse quantization/inverse transformation parts 8 and 55 perform their transformation processes (inverse transformation processes) in units of a block having the transformation block size included in the prognosis difference coding parameters in Modality 1 mentioned above, the transformation block size unit can be uniquely determined by the transformation process part, or can be formed to have a hierarchical structure as shown in Fig. 24. In this case, a flag showing whether or not the division is performed is encoded for each hierarchical layer. The division mentioned above can be performed for each partition or for each coding block.
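The per-layer division flag just described can be sketched as a recursive signalling routine; the function names, the four-way split into half-size sub-blocks, and the should_split callback are assumptions of this sketch, not the patent's exact scheme:

```python
def encode_split_flags(block_size, min_size, should_split, depth=0, flags=None):
    """Emit one flag per hierarchical layer saying whether the transform
    block at that layer is divided further (quadtree-style sketch)."""
    if flags is None:
        flags = []
    if block_size <= min_size:
        return flags                 # no flag needed at the smallest size
    split = should_split(block_size, depth)
    flags.append(split)
    if split:
        for _ in range(4):           # four sub-blocks of half the size
            encode_split_flags(block_size // 2, min_size, should_split,
                               depth + 1, flags)
    return flags
```

A decoder would read the flags back in the same traversal order to recover the transform block structure.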
Although the transformation mentioned above is assumed to be carried out in units of a square block, the transformation can alternatively be carried out in units of a non-square block, such as a rectangular block. Modality 3. Although the example in which each of the direct vector generation parts 23 and 62 of the compensated motion prognosis parts 5 and 54 generates both a spatial direct vector and a temporal direct vector is shown in Modality 1 mentioned above, each of the direct vector generation parts may alternatively determine an initial search point at the time of generating a spatial direct vector and a temporal direct vector, and search through the vicinity of the initial search point to determine a direct vector. Fig. 25 is a block diagram showing the compensated motion prognosis part 5 of a moving image encoding device according to Modality 3 of the present invention. In the figure, because the same reference numerals as those shown in Fig. 2 denote the same or similar components, the explanation of those components will be omitted hereinafter. A direct vector generation part 25 performs a process of generating both a spatial direct vector and a temporal direct vector. Fig. 26 is a block diagram showing the direct vector generation part 25 that constructs the compensated motion prognosis part 5. Referring to Fig. 26, an initial vector generation part 34 performs a process of generating an initial vector from the motion vector of an already encoded block. A motion vector search part 35 performs a process of searching through the vicinity of the initial search point shown by the initial vector generated by the initial vector generation part 34 to determine a direct vector. Fig. 27 is a block diagram showing the initial vector generation part 34 that constructs the direct vector generation part 25.
Referring to Fig. 27, a spatial vector generation part 71 performs a process of generating a spatial vector from the motion vector of an already encoded block using, for example, the same method as that which the spatial direct vector generation part 31 shown in Fig. 3 uses. A temporal vector generation part 72 performs a process of generating a temporal vector from the motion vector of an already encoded block using, for example, the same method as that which the temporal vector generation part 32 shown in Fig. 3 uses. An initial vector determination part 73 performs a process of selecting either the spatial vector generated by the spatial vector generation part 71 or the temporal vector generated by the temporal vector generation part 72 as the initial vector. Fig. 28 is a block diagram showing the initial vector determination part 73 that constructs the initial vector generation part 34. Referring to Fig. 28, a motion compensation part 81 performs a process of generating a list 0 prognostic image in the spatial direct mode, a list 1 prognostic image in the spatial direct mode, a list 0 prognostic image in the temporal direct mode, and a list 1 prognostic image in the temporal direct mode using the same method as that which the motion compensation part 41 shown in Fig. 4 uses. A similarity calculation part 82 performs a process of calculating the degree of similarity between the list 0 prognostic image and the list 1 prognostic image in the spatial direct mode as a spatial evaluated value, and also calculating the degree of similarity between the list 0 prognostic image and the list 1 prognostic image in the temporal direct mode as a temporal evaluated value, using the same method as that of the similarity calculation part 42 shown in Fig. 4.
An initial vector determination part 83 performs a process of making a comparison between the spatial evaluated value and the temporal evaluated value which are calculated by the similarity calculation part 82, to select the spatial vector or the temporal vector according to the result of the comparison. Fig. 29 is a block diagram showing the compensated motion prognosis part 54 of a moving image decoding device according to Modality 3 of the present invention. In the figure, because the same reference numerals as those shown in Fig. 6 denote the same or similar components, the explanation of those components will be omitted hereinafter. A direct vector generation part 64 performs a process of generating both a spatial direct vector and a temporal direct vector. The internal structure of the direct vector generation part 64 is the same as that of the direct vector generation part 25 shown in Fig. 25. Next, the operation of the moving image encoding device and the operation of the moving image decoding device will be explained. Because the moving image encoding device and the moving image decoding device according to this modality have the same structures as those according to Modality 1 mentioned above, with the exception that the direct vector generation parts 23 and 62 of the compensated motion prognosis parts 5 and 54 according to Modality 1 mentioned above are replaced by the direct vector generation parts 25 and 64, only the processes performed by each of the direct vector generation parts 25 and 64 will be explained hereinafter. Because the process performed by the direct vector generation part 25 is the same as that performed by the direct vector generation part 64, the process performed by the direct vector generation part 25 will be explained hereinafter. The initial vector generation part 34 of the direct vector generation part 25 generates an initial vector MV_first from the motion vector of an already encoded block.
More specifically, the spatial vector generation part 71 of the initial vector generation part 34 generates a spatial vector from the motion vector of an already encoded block using, for example, the same method as that which the spatial direct vector generation part 31 shown in Fig. 3 uses. As an alternative, the spatial vector generation part can generate a spatial vector using another method. The temporal vector generation part 72 of the initial vector generation part 34 generates a temporal vector from the motion vector of an already encoded block using, for example, the same method as that which the temporal vector generation part 32 shown in Fig. 3 uses. As an alternative, the temporal vector generation part can generate a temporal vector using another method. After the spatial vector generation part 71 generates a spatial vector and the temporal vector generation part 72 generates a temporal vector, the initial vector determination part 73 of the initial vector generation part 34 selects either the spatial vector or the temporal vector as the initial vector MV_first. More specifically, the motion compensation part 81 of the initial vector determination part 73 generates a list 0 prognostic image in the spatial direct mode, a list 1 prognostic image in the spatial direct mode, a list 0 prognostic image in the temporal direct mode, and a list 1 prognostic image in the temporal direct mode using the same method as that which the motion compensation part 41 shown in Fig. 4 uses. The similarity calculation part 82 of the initial vector determination part 73 calculates the degree of similarity between the list 0 prognostic image and the list 1 prognostic image in the spatial direct mode as a spatial evaluated value, and also calculates the degree of similarity between the list 0 prognostic image and the list 1 prognostic image in the temporal direct mode as a temporal evaluated value, using the same method as that which the similarity calculation part 42 shown in Fig. 4 uses.
The initial vector determination part 83 of the initial vector determination part 73 refers to the result of the comparison between the spatial evaluated value and the temporal evaluated value which are calculated by the similarity calculation part 82, and selects, from the spatial vector and the temporal vector, the vector that provides the greater degree of similarity between the prognostic images. After the initial vector generation part 34 generates the initial vector MV_first, the motion vector search part 35 of the direct vector generation part 25 searches through a range of ±n centered on the initial search point (block) shown by the initial vector MV_first, as shown in Fig. 30, to determine a direct vector. The motion vector search part can perform the evaluation at the time of the search by performing, for example, the same process as that performed by the similarity calculation part 82 shown in Fig. 28. In this case, when the position shown by the initial vector is expressed as v, the motion vector search part calculates an SAD evaluated value at the time of the search for each searched position v + Δv, as shown in the following equation (11).

SAD = |f(v + Δv) − g(v + Δv)|  (11)

In this case, the search range n can be fixed or can be determined for each header in an upper layer, such as each slice header. In addition, although the range of the search points (search range) is assumed to be a square, the range can alternatively be a rectangle or a quadrilateral such as a rhombus. After calculating the SAD evaluated values at the time of the search, the motion vector search part 35 outputs the motion vector within the search range that provides the smallest SAD evaluated value to the motion compensation processing part 24 as a direct vector.
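A minimal sketch of the ±n search around MV_first described above; the evaluate_sad callback stands in for the similarity evaluation of equation (11) and, like the square window, is an assumption of this illustration:

```python
def search_direct_vector(mv_first, n, evaluate_sad):
    """Scan the square window of ±n centred on the initial vector MV_first
    and return the candidate with the smallest SAD evaluated value."""
    best_vector, best_sad = None, float("inf")
    for dx in range(-n, n + 1):
        for dy in range(-n, n + 1):
            candidate = (mv_first[0] + dx, mv_first[1] + dy)
            sad = evaluate_sad(candidate)
            if sad < best_sad:
                best_vector, best_sad = candidate, sad
    return best_vector
```

A rectangular or rhombus-shaped window, as the text allows, would only change the iteration bounds.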
Although the example in which each of the compensated motion prognosis parts generates both a temporal direct vector and a spatial direct vector and selects one of the direct vectors is shown in this Modality 3, each of the compensated motion prognosis parts can add another vector as a candidate vector, in addition to the temporal direct vector and the spatial direct vector, and select a direct vector from among these candidate vectors. For example, each of the compensated motion prognosis parts can add the spatial vectors MV_A, MV_B, and MV_C and the temporal vectors MV_1 to MV_8 as shown in Fig. 17 to the candidate vectors, and select a direct vector from among these spatial and temporal vectors. In addition, each of the compensated motion prognosis parts can generate a vector from a large number of already encoded vectors, and add that vector to the candidate vectors, as shown in Fig. 18. While such an increase in the number of candidate vectors increases the amount of information to be processed, the accuracy of the direct vector can be improved and therefore the coding efficiency can be improved. Also in this Modality 3, the candidates for the direct vector can be determined on a per-slice basis. Information showing which vectors are to be selected as candidates is multiplexed into each slice header. For example, because the effect of a temporal vector is small in a video that is acquired using a camera, a method can be considered of removing temporal vectors from the candidates for selection for such a video and, because the effect of a spatial vector is large in a video that is acquired by a fixed camera, of adding spatial vectors to the candidates for selection for such a video.
Although the greater the number of candidate vectors, the closer to the original image the generated prognostic image can be, a balance between the amount of information to be processed and the coding efficiency can be achieved by determining the candidates in consideration of the characteristics of the video, such as by excluding ineffective vectors from the candidates in advance, in order to prevent the amount of information to be processed from increasing enormously with the number of candidate vectors. Switching a vector between candidate and non-candidate is achieved using, for example, a method of providing an ON/OFF flag for each vector and defining only a vector whose flag is set to ON as a candidate. A motion vector that can be a candidate for selection can be switched between candidate and non-candidate using each slice header or each header in an upper layer, such as each sequence header or each figure header. In addition, one or more sets of motion vectors each of which can be a candidate for selection can be prepared, and an index showing each of the candidate sets can be encoded. In addition, a vector can be switched between candidate and non-candidate for each macro block or each coding block. Switching a vector between candidate and non-candidate for each macro block or each coding block gives locality to each macro block or coding block, and provides an advantage of improving the coding efficiency. In addition, the selection candidates can be uniquely determined for each partition block size. Because the spatial correlation in general becomes weak as the block size becomes small, it can be expected that the prognostic accuracy of a vector determined through an average prognosis becomes worse. Therefore, by removing a motion vector determined through an average prognosis from the candidates, for example, the amount of information to be processed can be reduced without lowering the coding efficiency.
Although the explanation in this Modality 3 assumes the case in which both a temporal direct vector and a spatial direct vector exist, there is a case in which no motion vector exists, such as when an intra coding process is performed on the coding block Bn. In this case, a method of setting a zero vector as the motion vector, a method of not including any such motion vector in the candidates, and so on can be considered. While the coding efficiency can be improved because the candidates increase in number when a zero vector is set as the motion vector, the amount of information to be processed increases. When no such motion vector is included in the candidates for the direct vector, the amount of information to be processed can be reduced. Although the example of generating a direct vector is shown in this Modality 3, the direct vector can also be used as a predicted vector for encoding a normal motion vector. While the amount of information to be processed increases when the direct vector is used as a predicted vector, the coding efficiency can be improved because the accuracy of the prognosis increases. Although the example of calculating an SAD evaluated value from a combination of an image located behind the coding block Bn in time and an image located in front of the coding block Bn in time is shown in this Modality 3 (refer to Fig. 15), an SAD evaluated value can alternatively be calculated from a combination of only images located behind the coding block Bn in time, as shown in Fig. 19. As an alternative, an SAD evaluated value can be calculated from a combination of only images located in front of the coding block Bn in time. In this case, the temporal vectors are expressed by the following equations (12) and (13).

v̂0 = (d0/dcol) × vcol  (12)
v̂1 = (d1/dcol) × vcol  (13)

where v̂0 is the list 0 vector and v̂1 is the list 1 vector. In the equations above, d denotes a temporal distance, d0 denotes the temporal distance of the list 0 reference image, and d1 denotes the temporal distance of the list 1 reference image.
In addition, vcol and dcol denote the vector of the block spatially located at the same position in the reference image as the coding block, and the temporal distance of the reference image shown by that vector, respectively. Even in a case in which the two reference image lists indicate the same reference image, the same method as that shown in Fig. 19 can be applied. Although the case in which each of the two reference image lists has two or more reference images is assumed in this Modality 3, a case can also be considered in which only one reference image is included in each of the two reference image lists. In this case, when the same reference image is set for the two reference image lists, the determination can be performed using only the spatial vector, without using any temporal vector. When different reference images are set for the two reference image lists, the determination can be handled using the method mentioned above. Although a prognosis process in two directions is assumed to be performed in this Modality 3, a prognosis process in only one direction can alternatively be performed. When a prognosis using a vector in one direction is performed, information showing which vector is used is encoded and transmitted. As a result, a problem such as occlusion can be dealt with, and a contribution to an improvement in the prognosis accuracy can be made. Although it is assumed in this Modality 3 that a prognosis using two vectors is performed, the number of vectors can be three or more. In this case, for example, a method can be considered of generating a prognostic image using all the candidate vectors, among a large number of candidate vectors, each of which provides an SAD evaluated value equal to or less than a threshold Th.
In addition, instead of using all the candidates each of which provides an SAD evaluated value equal to or less than the threshold Th, a maximum number of vectors to be used can be set in advance in each slice header or the like, and a prognostic image can be generated using up to that maximum number of vectors, in increasing order of evaluated value. A vector is determined from an evaluation between reference images in this Modality 3. This evaluation can alternatively be performed from a comparison between an already encoded image that is spatially adjacent to the coding block and a reference image. In this case, a method of performing the evaluation using an L-shaped image such as that shown in Fig. 20 can be considered. In addition, when an already encoded image that is spatially adjacent to the coding block is used, there is a possibility that the already encoded image is not available in time for the comparison because of pipeline processing. In this case, a method of using the prognostic image instead of the already encoded image can be considered. Although the example of searching for a motion vector after determining an initial vector is shown in this Modality 3, whether or not to search for a motion vector can be determined using a flag on a per-slice basis. In this case, while the coding efficiency is reduced, an advantage of being able to greatly reduce the amount of information to be processed is provided. The flag can be provided on a per-slice basis, or can be determined for each sequence, each figure, or the like in an upper layer. When the flag is in the OFF state and no motion search is performed, the same operation as that according to Modality 1 mentioned above is performed. Although it is assumed in this Modality 3 that each of the direct vector generation parts 25 and 64 performs the vector generation process regardless of the block size, this process can be limited to the case in which the block size is equal to or smaller than a predetermined block size.
A flag showing whether or not to limit the process to the case in which the block size is equal to or smaller than the predetermined block size, and information showing the predetermined block size, can be multiplexed into each header in an upper layer, such as each slice header. The flag and the information can also be changed according to a maximum CU size. There is a tendency for the correlation between reference images to become low and for the errors to become large as the block size becomes small. As a result, for large block sizes there are many cases in which, whichever vector is selected, the performance is hardly affected, and an advantage of reducing the amount of information to be processed without lowering the coding performance is provided by turning off the process for large block sizes. Modality 4. In Modality 1 mentioned above, the example is shown in which each of the compensated motion prognosis parts 5 and 54 generates a spatial direct vector in the spatial direct mode from the motion vector of an already encoded block (already decoded block) located in the vicinity of the coding block, and also generates a temporal direct vector in the temporal direct mode from the motion vector of an already encoded figure (already decoded figure) that can be referred to by the coding block, and selects, from the spatial direct vector and the temporal direct vector, the direct vector that provides the greater correlation between reference images. As an alternative, the compensated motion prognosis part 5 of the moving image encoding device can select a motion vector suitable for generating a prognostic image from one or more selectable motion vectors and perform a compensated motion prognosis process on the coding block to generate a prognostic image using that motion vector, and can also output index information showing the motion vector to the variable length coding part 13.
On the other hand, the compensated motion prognosis part 54 of the moving image decoding device can perform a compensated motion prognosis process on the coding block to generate a prognostic image using the motion vector shown by the index information that is multiplexed into the sequence of bits. Fig. 31 is a block diagram showing the compensated motion prognosis part 5 of a moving image encoding device according to Modality 4 of the present invention. In the figure, because the same reference numerals as those shown in Fig. 2 denote the same or similar components, the explanation of those components will be omitted hereinafter. A direct vector generation part 26 performs a process of referring to a direct vector candidate index in which the selectable motion vectors and index information showing each motion vector are described, selecting a motion vector suitable for generating a prognostic image from the one or more selectable motion vectors, outputting the selected motion vector to the motion compensation processing part 24 as a direct vector, and also outputting the index information showing the motion vector to the variable length coding part 13. When variable-length-encoding the compressed data, the encoding mode, etc., the variable length coding part 13 includes the index information in the inter-prognosis parameters and then variable-length-encodes these inter-prognosis parameters. Fig. 32 is a block diagram showing the compensated motion prognosis part 54 of a moving image decoding device according to Modality 4 of the present invention. In the figure, because the same reference numerals as those shown in Fig. 6 denote the same or similar components, the explanation of those components will be omitted hereinafter.
The direct vector generation part 65 performs a process of receiving a candidate direct vector index, in which a selectable motion vector and index information showing the selectable motion vector are described, reading the motion vector shown by the index information included in the inter-prognosis parameters from the candidate direct vector index, and outputting the motion vector to the motion compensation processing part 63 as a direct vector. In the following, the operation of the moving image encoding device and the operation of the moving image decoding device will be explained. Because the moving image encoding device and the moving image decoding device according to this modality have the same structures as those according to Modality 1 mentioned above, with the exception that the direct vector generation parts 23 and 62 of the compensated motion prognosis parts 5 and 54 according to Modality 1 mentioned above are replaced by the direct vector generation parts 26 and 65, only the processing performed by each of the direct vector generation parts 26 and 65 will be explained hereinafter. The direct vector generation part 26 of the compensated motion prognosis part 5 generates a direct vector for each Pin partition of a coding block Bn when the coding mode m(Bn) of the coding block is a direct mode. More specifically, the direct vector generation part 26 selects a motion vector suitable for generating a prognostic image from one or more selectable motion vectors by referring to the candidate direct vector index as shown in Fig. 33. Although five motion vectors are listed as the one or more selectable motion vectors in the example shown in Fig. 33, an index of 0 is assigned to "medium" in a spatial prognosis because "medium" is selected most often in the spatial prognosis.
When selecting a motion vector suitable for generating a prognostic image, the direct vector generation part 26 calculates a cost R from the distortion between the prognostic image acquired using each of the selectable motion vectors and the original image, and from the code amount of the index of each of the selectable motion vectors, as shown in the following equation (14), and selects the motion vector whose cost R is the lowest from among the large number of motion vectors.

R = D + λ·ξ(i) (14)

where D is the residual signal between the prognostic image and the original image, i is the index, λ is a Lagrange multiplier, and ξ() is the code amount of the term within the parentheses. After selecting the motion vector whose cost R is the lowest from among the large number of motion vectors, the direct vector generation part 26 outputs the motion vector to the motion compensation processing part 24 as a direct vector, and also outputs the index information indicating the motion vector to the variable length coding part 13. For example, when selecting "medium" as the motion vector whose cost R is the lowest, the direct vector generation part outputs the index of 0 to the variable length coding part 13, whereas when selecting "MV_A" as the motion vector whose cost R is the lowest, the direct vector generation part outputs the index of 1 to the variable length coding part 13. When receiving the index information from the direct vector generation part 26, the variable length coding part 13 includes the index information in the inter-prognosis parameters and then encodes these inter-prognosis parameters by variable length when encoding the compressed data, the coding mode, etc. by variable length. When the coding mode m(Bn) of the coding block Bn is a direct mode, the direct vector generation part 65 of the compensated motion prognosis part 54 generates a direct vector for each Pin partition of the coding block Bn.
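The minimum-cost selection can be sketched as follows; this is an illustrative reading, not the patented implementation, and the distortion measure (SAD), the index code-length model, and all names here are assumptions:

```python
# Hypothetical sketch of the rule R = D + lambda * xi(i): D is the
# distortion between the prognostic image and the original image, and
# xi(i) is the code amount of the candidate index i.

def index_code_bits(index):
    """xi(i): assumed code lengths; index 0 ("medium") gets the shortest."""
    return 1 if index == 0 else 1 + index

def sad(block_a, block_b):
    """D: sum of absolute differences over two equally sized blocks."""
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def select_direct_vector(candidates, original, predict, lam):
    """candidates: list of (index, motion_vector) pairs; predict(mv)
    returns the prognostic block produced by motion compensation with
    mv. Returns the index information and the selected direct vector."""
    best = None
    for index, mv in candidates:
        cost = sad(predict(mv), original) + lam * index_code_bits(index)
        if best is None or cost < best[0]:
            best = (cost, index, mv)
    return best[1], best[2]
```

The Lagrange multiplier λ controls the trade-off: a candidate with a longer index code is selected only when its distortion advantage outweighs the extra index cost.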
More specifically, the direct vector generation part 65 receives the same candidate direct vector index (e.g., the candidate direct vector index shown in Fig. 33) as that which the direct vector generation part 26 shown in Fig. 31 receives. When receiving inter-prognosis parameters including index information from a selection switch 61, the direct vector generation part 65 reads the motion vector shown by the index information from the candidate direct vector index, and outputs this motion vector to the motion compensation processing part 63 as a direct vector. For example, when the index information is the index of 0, the direct vector generation part outputs "medium" as a direct vector, whereas when the index information is the index of 1, the direct vector generation part outputs "MV_A" as a direct vector. As can be seen from the above description, because the moving image encoding device according to this Modality 4 is constructed in such a way as to select a motion vector suitable for generating a prognostic image from one or more selectable motion vectors, perform a compensated motion prognosis process on a coding block to generate a prognostic image using the motion vector, and also output index information showing the motion vector to the variable length coding part 13, an advantage is provided of being able to select an optimum direct mode for each predetermined block unit, and thereby being able to reduce the amount of code, as in the case of Modality 1 mentioned above. Although the explanation is made in this Modality 4 assuming the case in which a motion vector exists at each position that can be selected, there is a case in which no motion vector exists, such as when an intra-coding process is performed on the coding block Bn. In this case, a method of setting a zero vector as a motion vector, a method of not including any such motion vector in the candidates, and so on, can be considered.
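On the decoding side, the candidate direct vector index behaves like a small lookup table mapping index information to a motion vector. A sketch under the assignment described above (only indices 0 and 1 are fixed by the text; the remaining rows and the table representation are assumptions):

```python
# Hypothetical representation of the candidate direct vector index of
# Fig. 33: index information decoded from the bit sequence selects one
# of the five selectable motion vectors.

CANDIDATE_DIRECT_VECTOR_INDEX = {
    0: "medium",    # spatial average prognosis, selected most often
    1: "MV_A",      # per the example in the text
    2: "MV_B",      # assumed assignment
    3: "MV_C",      # assumed assignment
    4: "temporal",  # assumed assignment
}

def read_direct_vector(index_information):
    """Returns the motion vector shown by the index information."""
    return CANDIDATE_DIRECT_VECTOR_INDEX[index_information]
```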
While the coding efficiency can be improved because the candidates increase in number when a zero vector is configured as a motion vector, the amount of information to be processed also increases. When no such motion vector is included in the candidates for the direct vector, the amount of information to be processed can be reduced. Although the example of generating a direct vector is shown in this Modality 4, the vector can also be used as a predicted vector for encoding a normal motion vector. While the amount of information to be processed increases when the direct vector is used as a predicted vector, the coding efficiency can be improved because the accuracy of the prognosis increases. Although the candidates for selectable motion vectors are fixed in this Modality 4, the candidates for selectable motion vectors can alternatively be determined on a per slice basis. Information showing which vectors are to be selected as the candidates is multiplexed into each slice header. For example, there is a method of removing temporal vectors from the candidates for selection for a video that is acquired using a moving camera, because the effect of a temporal vector is low on such a video, and of adding spatial vectors to the candidates for selection for a video that is acquired by a fixed camera, because the effect of a spatial vector is great on such a video. While a prognostic image closer to the original image can be generated as the number of candidate vectors increases, a balance between the amount of information to be processed and the coding efficiency can be achieved by determining the candidates while taking the locality of the video into account, such as by excluding ineffective vectors from the candidates in advance, in order to prevent the amount of information to be processed from increasing enormously due to the increase in the number of candidate vectors.
Switching a vector between candidate and non-candidate can be achieved using, for example, a method of providing an ON/OFF flag for each vector, and defining only the vectors whose flags are set to ON as candidates. A motion vector that can be a candidate for selection can be switched between candidate and non-candidate using each slice header or each header in an upper layer, such as each sequence header or each figure header. In addition, one or more sets of motion vectors, each of which may be a candidate for selection, can be prepared, and an index showing each of the candidate sets can be encoded. In addition, a vector can be switched between candidate and non-candidate for each macro block or each coding block. Switching a vector between candidate and non-candidate for each macro block or each coding block can provide each macro block or coding block with locality, and provides an advantage of improving the coding efficiency. Although the index order is fixed in this Modality 4, the index order can alternatively be changed on a per slice basis. When the selection of vectors that is carried out on a per slice basis has a bias, the index table is changed in such a way that a shorter code is assigned to a vector having a higher selection frequency, thereby providing an improvement in the coding efficiency. Encoding the information showing the change can be accomplished by encoding the order of each vector, or by preparing a large number of index sets and encoding information showing which index set is used. In addition, a method can be considered of predetermining only a default setting, setting a flag showing whether or not a setting different from the default setting is used, and updating the index setting and switching to the new setting only when the flag is set.
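The per-vector ON/OFF flag can be sketched as follows (a minimal illustration; how the flags are coded into the header is not shown, and the names are assumptions):

```python
# Hypothetical sketch: only vectors whose ON/OFF flag is set to ON
# remain candidates for selection in the slice.

ALL_VECTORS = ["medium", "MV_A", "MV_B", "MV_C", "temporal"]

def candidate_vectors(on_off_flags):
    """on_off_flags: mapping from vector name to True (ON) or False
    (OFF); a vector absent from the mapping is treated as OFF."""
    return [v for v in ALL_VECTORS if on_off_flags.get(v, False)]
```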
Although the example of changing the order of the indices on a per slice basis is shown above, it goes without saying that the order of the indices can alternatively be determined for each sequence, each figure, or the like in an upper layer. As an alternative, the order of the indices can be changed on a per macro block basis or on a per coding block basis. Changing the order of the indices on a per macro block basis or on a per coding block basis can provide each macro block or coding block with locality, and can provide an improvement in the coding efficiency. In addition, the selection candidates can be uniquely determined for each partition block size. Because the spatial correlation in general becomes weak as the partition block size becomes small, the prediction accuracy of a vector determined through an average prognosis is considered to become worse. To solve this problem, by changing the order of the index assigned to the vector determined through an average prognosis, an improvement in the coding efficiency can be provided. Although candidate direct vector indices respectively showing five pre-prepared selectable motion vectors are shown in this Modality 4, six or more motion vectors or four or fewer motion vectors can alternatively be prepared as the candidate vectors. For example, vectors close to a temporal vector as shown in Fig. 17, or a vector resulting from a weighted sum of vectors in the vicinity of the coding block as shown in Fig. 18, can be added as candidate vectors. Although a prognostic process from two directions is assumed to be performed in this Modality 4, a prognostic process in only one direction can alternatively be performed. When a prognosis from a vector in one direction is performed, information showing which vector is used is encoded and transmitted. As a result, a problem such as occlusion can be treated, and a contribution to an improvement in the prognostic accuracy can be made.
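The per-slice index-table change can be sketched as follows; the frequency-count representation is an assumption, and, as the text notes, a real coder would also have to encode information showing the chosen order:

```python
# Hypothetical sketch: reorder the index table so that a vector with a
# higher selection frequency in the slice receives a smaller index and
# hence a shorter code.

def reorder_index_table(selection_counts):
    """selection_counts: mapping vector name -> selection frequency.
    Returns a table mapping index -> vector, most frequent first."""
    ordered = sorted(selection_counts, key=selection_counts.get, reverse=True)
    return {index: vector for index, vector in enumerate(ordered)}
```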
Although it is assumed in this Modality 4 that a bi-directional prognosis using two vectors is performed, the number of vectors can alternatively be three or more. In this case, for example, index information showing all the selected vectors can be encoded. Conversely, index information showing the vectors that are not selected can be encoded. As an alternative, a method can be considered of encoding only index information showing a single vector, and using an image close to the reference image shown by the vector, as shown in Fig. 34. Although the example of selecting the motion vector whose cost R is the lowest from among a large number of motion vectors is shown in this Modality 4, an evaluated value SADk can alternatively be calculated according to the following equation (15), and a motion vector whose evaluated value SADk is equal to or less than a threshold Th can be selected.

SADk = |f_index − g_k| (15)

where f_index denotes the reference image shown by the vector whose index information is encoded, and g_k denotes the reference image shown by a vector MV_k. Although the example of using the evaluated value SADk is shown above, it goes without saying that the evaluation can alternatively be performed using another method, such as SSE. Information showing the number of vectors used can be multiplexed into each header in an upper layer, such as each slice header. While the coding efficiency is improved with an increase in the number of vectors, there is a trade-off relationship between the coding efficiency and the amount of information to be processed, because the amount of information to be processed increases with an increase in the number of vectors. As an alternative, the information showing the number of vectors used can be multiplexed not into each slice header, but into each smaller unit, such as each coding block or each partition. In this case, a balance can be achieved between the amount of information to be processed and the coding efficiency according to the locality of the image.
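The threshold-based alternative of equation (15) can be sketched as follows (illustrative only; the block representation and the first-match acceptance policy are assumptions):

```python
# Hypothetical sketch: accept a candidate vector MV_k whose evaluated
# value SADk = |f_index - g_k| is equal to or less than a threshold Th,
# instead of searching for the minimum-cost vector.

def sad(img_a, img_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b) for a, b in zip(img_a, img_b))

def first_vector_under_threshold(f_index, candidates, th):
    """f_index: reference image shown by the vector whose index
    information is encoded; candidates: list of (name, g_k) pairs,
    where g_k is the reference image shown by MV_k. Returns the first
    candidate whose SADk <= Th, or None if no candidate qualifies."""
    for name, g_k in candidates:
        if sad(f_index, g_k) <= th:
            return name
    return None
```

Accepting the first vector under the threshold avoids evaluating the remaining candidates, which is the point of the threshold variant: less information to be processed at the price of possibly missing a slightly better vector.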
Although the example of selecting a motion vector suitable for generating a prognostic image from among a large number of selectable motion vectors is shown in this Modality 4, a motion vector that is used as an initial vector can alternatively be selected from among the large number of selectable motion vectors, and a final motion vector can then be determined by searching through the vicinity of the initial vector, as in the case of Modality 3 mentioned above. In this case, the direct vector generation part 26 has a structure as shown in Fig. 35. The initial vector generation part 36 shown in Fig. 35 corresponds to the initial vector generation part 34 shown in Fig. 26. Modality 5. Each of the compensated motion prognosis parts 5 and 54 according to this Modality 5 has the functions according to Modality 1 mentioned above (or Modality 2 or 3) and the functions according to Modality 4 mentioned above, can switch between the functions according to Modality 1 mentioned above (or Modality 2 or 3) and the functions according to Modality 4 mentioned above on a per slice basis, and can use both the functions according to Modality 1 mentioned above (or Modality 2 or 3) and the functions according to Modality 4 mentioned above to generate a prognostic image. Fig. 36 is a block diagram showing the compensated motion prognosis part 5 of a moving image encoding device according to Modality 5 of the present invention. In the figure, because the same reference numerals as those shown in Fig. 31 denote the same or similar components, the explanation of the components will be omitted hereinafter. The direct vector generation part 27 performs a process of, when a direct mode switching flag shows that the index information is not transmitted, generating a direct vector using the same method as that which the direct vector generation part 23 shown in Fig. 2 (or the direct vector generation part 25 shown in Fig.
25) uses, and, when the direct mode switching flag shows that the index information is transmitted, generating a direct vector and also outputting index information showing the direct vector to the variable length coding part 13, using the same method as that which the direct vector generation part 26 shown in Fig. 31 uses. The direct vector generation part 27 also performs a process of outputting the direct mode switching flag to the variable length coding part 13. Fig. 37 is a block diagram showing the direct vector generation part 27 that constructs the compensated motion prognosis part 5. Referring to Fig. 37, a selection switch 91 performs a process of, when the direct mode switching flag shows that the index information is not transmitted, outputting each Pin partition of a coding block Bn to the part corresponding to the direct vector generation part 23 shown in Fig. 2 (or the direct vector generation part 25 shown in Fig. 25), and, when the direct mode switching flag shows that the index information is transmitted, outputting each Pin partition of the coding block Bn to the part corresponding to the direct vector generation part 26 shown in Fig. 31. Fig. 38 is a block diagram showing the compensated motion prognosis part 54 of a moving image decoding device according to Modality 5 of the present invention. In the figure, because the same reference numerals as those shown in Fig. 32 denote the same or similar components, the explanation of the components will be omitted hereinafter. The direct vector generation part 66 performs a process of, when the direct mode switching flag included in the inter-prognosis parameters shows that the index information is not transmitted, generating a direct vector using the same method as that which the direct vector generation part 62 shown in Fig. 6 (or the direct vector generation part 64 shown in Fig.
29) uses, and, when the direct mode switching flag shows that the index information is transmitted, generating a direct vector using the same method as that which the direct vector generation part 65 shown in Fig. 32 uses. In the following, the operation of the moving image encoding device and the operation of the moving image decoding device will be explained. The direct vector generation part 27 of the compensated motion prognosis part 5 has the functions of the direct vector generation part 23 shown in Fig. 2 (or the direct vector generation part 25 shown in Fig. 25) and the functions of the direct vector generation part 26 shown in Fig. 31, and, when the direct mode switching flag input thereto from outside the direct vector generation part shows that the index information is not transmitted, it generates a direct vector using the same method as that which the direct vector generation part 23 shown in Fig. 2 (or the direct vector generation part 25 shown in Fig. 25) uses, and outputs the direct vector to the motion compensation processing part 24. The direct vector generation part 27 also outputs the direct mode switching flag to the variable length coding part 13. When the direct mode switching flag shows that the index information is transmitted, the direct vector generation part 27 generates a direct vector using the same method as that which the direct vector generation part 26 shown in Fig. 31 uses, and outputs the direct vector to the motion compensation processing part 24. The direct vector generation part 27 also outputs the direct mode switching flag and the index information to the variable length coding part 13. When receiving the direct mode switching flag from the direct vector generation part 27, the variable length coding part 13 includes the direct mode switching flag in the inter-prognosis parameters and encodes these inter-prognosis parameters by variable length when encoding the compressed data, the coding mode, etc. by variable length.
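The dispatch on the direct mode switching flag can be sketched as follows (the callables stand in for the two generation methods; all names here are assumptions):

```python
# Hypothetical sketch of the direct vector generation part 27: when the
# flag shows that the index information is not transmitted, generate
# the vector by internal selection only; otherwise generate it together
# with the index information to be coded by variable length.

def generate_direct_vector(transmit_index, by_selection, by_index):
    """by_selection(): Modality 1 style generation (returns a vector);
    by_index(): Modality 4 style generation (returns vector, index)."""
    if not transmit_index:
        return by_selection(), None       # no index information output
    direct_vector, index_information = by_index()
    return direct_vector, index_information
```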
When receiving the direct mode switching flag and the index information from the direct vector generation part 27, the variable length coding part 13 includes the direct mode switching flag and the index information in the inter-prognosis parameters and encodes these inter-prognosis parameters by variable length when encoding the compressed data, the coding mode, etc. by variable length. When receiving the inter-prognosis parameters decoded by the variable length decoding part 51, the direct vector generation part 66 of the compensated motion prognosis part 54 generates a direct vector using the same method as that which the direct vector generation part 62 shown in Fig. 6 (or the direct vector generation part 64 shown in Fig. 29) uses when the direct mode switching flag included in the inter-prognosis parameters shows that the index information is not transmitted. Conversely, when the direct mode switching flag shows that the index information is transmitted, the direct vector generation part generates a direct vector using the same method as that which the direct vector generation part 65 shown in Fig. 32 uses. In general, the additional information increases in a mode in which the index information is transmitted when compared with a mode in which the index information is not transmitted. Therefore, when the percentage of additional information in the total code amount is large, such as when the transmission rate is low, the performance in a mode in which the index information is not transmitted is greater than that in a mode in which the index information is transmitted. Conversely, when the percentage of additional information in the total code amount is small, such as when the transmission rate is high, it is expected that the coding efficiency is further improved by adding the index information and using an optimum direct vector.
Although the example in which the direct mode switching flag is included in the inter-prognosis parameters is shown in this Modality 5, the direct mode switching flag can alternatively be multiplexed into each slice header, each figure header, or each sequence header. In addition, a method of determining the switching according to a partition size can be considered. In general, the percentage of additional information, such as a motion vector, becomes relatively small with an increase in the partition size. Therefore, a structure can be considered in which a mode in which the index information is transmitted is selected when the partition size is equal to or greater than a given size, and a mode in which the index information is not transmitted is selected when the partition size is less than the given size. When using the method of determining the switching according to a partition size, as mentioned above, a flag showing which mode is used for each coding block size can be multiplexed into each header in an upper layer, such as each slice header. Although the example of switching between the functions according to Modality 1 mentioned above and the functions according to Modality 4 mentioned above according to the direct mode switching flag is shown in this Modality 5, switching between the functions according to Modality 2 mentioned above and the functions according to Modality 4 mentioned above, or switching between the functions according to Modality 3 mentioned above and the functions according to Modality 4 mentioned above, can alternatively be performed. As an alternative, switching between the functions according to Modality 1 mentioned above and the functions according to Modality 2 mentioned above, switching between the functions according to Modality 1 mentioned above and the functions according to Modality 3 mentioned above, or switching between the functions according to Modality 2 mentioned above and the functions according to Modality 3 mentioned above can be performed.
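The partition-size rule described above can be sketched as follows (the threshold name given_size is an assumption):

```python
# Hypothetical sketch: transmit index information only for partitions
# whose size is equal to or greater than a given size, since the
# relative overhead of the index shrinks as the partition grows.

def transmit_index_information(partition_size, given_size):
    """True selects the mode in which index information is transmitted;
    False selects the mode in which it is not transmitted."""
    return partition_size >= given_size
```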
As an alternative, arbitrary functions can be selected from among the functions according to Modalities 1 to 4 mentioned above. Although the example of switching between the functions according to Modality 1 mentioned above and the functions according to Modality 4 mentioned above according to the direct mode switching flag is shown in this Modality 5, an ON/OFF flag can be provided instead of switching between the functions according to Modality 1 mentioned above and the functions according to Modality 4 mentioned above according to the direct mode switching flag. For example, a method can be considered of providing an ON/OFF flag showing whether or not Modality 1 is used, and, when the flag is set to ON, performing both Modality 1 and Modality 4, selecting the mode that provides the higher degree of coding efficiency from between the two, and encoding information showing the selected mode. This method provides an advantage of being able to switch between direct modes according to the locality of the image, and makes a contribution to an improvement in the coding efficiency. Although the flag to activate or deactivate Modality 1 is provided in the example mentioned above, a flag to activate or deactivate Modality 4 may alternatively be provided. As an alternative, Modalities 2 and 4 or Modalities 3 and 4 can be combined. Although the example of selecting a motion vector suitable for generating a prognostic image from among a large number of selectable motion vectors is shown in this Modality 5, a motion vector that is used as an initial vector can alternatively be selected from among the large number of selectable motion vectors, and a final motion vector can then be determined by searching through the vicinity of the initial vector, as in the case of Modality 3 mentioned above. In this case, the direct vector generation part 27 has a structure as shown in Fig. 39. The initial vector generation part 37 shown in Fig. 39 corresponds to the initial vector generation part 34 shown in Fig. 26.
While the invention has been described in its preferred modalities, it is to be understood that an arbitrary combination of two or more of the modalities mentioned above can be made, various changes can be made to an arbitrary component according to any one of the modalities mentioned above, and an arbitrary component according to any one of the modalities mentioned above can be omitted within the scope of the invention. Although it is described above that, for example, a maximum size is determined, an upper limit on the number of hierarchical layers in a hierarchy in which each of the coding blocks having the maximum size is hierarchically divided into blocks is also determined, and a coding mode that is suitable for each of the coding blocks into which each coding block having the maximum size is hierarchically divided is selected from one or more available coding modes, any or all of the maximum size, the upper limit on the number of hierarchical layers, and the coding mode can alternatively be determined in advance. Modality 6. Although the example in which the direct vector generation part 26 of the compensated motion prognosis part 5 in the moving image encoding device obtains one or more selectable motion vectors by referring to the candidate direct vector index as shown in Fig. 33 is shown in Modality 4 mentioned above, the coding control part 1 can alternatively generate a list of one or more selectable motion vectors according to the block size of a coding block, and a direct vector can be determined by referring to the candidate direct vector list showing the one or more selectable motion vectors and to the candidate direct vector index. Concretely, the coding control part according to this modality operates in the following way.
As mentioned above, one or more selectable motion vectors can be uniquely determined for each block size of the partition. For example, there is a high correlation between the partition which is the coding block and an adjacent block when the partition has a large block size, whereas there is a low correlation between the partition which is the coding block and an adjacent block when the partition has a small block size, as shown in Fig. 40. Therefore, the number of candidates for the one or more selectable motion vectors can be reduced as the block size of the partition decreases. For this purpose, the coding control part 1 lists one or more selectable motion vectors in advance for each of the block sizes available for the partition that is the coding block, as shown in Fig. 41. As can be seen from Fig. 41, the coding control part reduces the number of candidates for the one or more selectable motion vectors with the decrease in the block size of the partition. For example, while the number of selectable motion vectors is "4" for a partition whose block size is "64", the number of selectable motion vectors is "2" for a partition whose block size is "8". "Medium", "MV_A", "MV_B", "MV_C", and "temporal" shown in Fig. 41 correspond to "medium", "MV_A", "MV_B", "MV_C", and "temporal" shown in Fig. 33, respectively. When determining one or more selectable motion vectors, the coding control part 1 refers to, for example, the list shown in Fig. 41, specifies the one or more motion vectors corresponding to the block size of the partition which is the target to be encoded, and outputs the candidate direct vector list showing the one or more motion vectors to the compensated motion prognosis part 5. For example, when the block size of a partition is "64", the coding control part determines "MV_A", "MV_B", "MV_C", and "temporal" as the one or more selectable motion vectors.
In addition, when the block size of a partition is "8", the coding control part 1 determines "medium" and "temporal" as the one or more selectable motion vectors. When receiving the candidate direct vector list from the coding control part 1, the direct vector generation part 26 of the compensated motion prognosis part 5 selects a motion vector suitable for generating a prognostic image from among the one or more motion vectors shown by the candidate direct vector list, as in the case of Modality 4 mentioned above. In this case, because the number of candidates for the one or more selectable motion vectors is small when the block size of the partition is small, the number of calculations of the evaluated value SADk shown in the above-mentioned equation (15), and so on, is reduced, and the processing load on the compensated motion prognosis part 5 is reduced, for example. In the case in which the coding control part 1 of the moving image encoding device determines one or more selectable motion vectors in this way, the moving image decoding device also needs to hold a list of one or more selectable candidate direct vectors that is completely the same as that of the moving image encoding device. When the coding mode m(Bn) is a direct mode, for each Pin partition of the coding block Bn, the variable length decoding part 51 of the moving image decoding device outputs the block size of the partition to the compensated motion prognosis part 54, and also outputs the index information that the variable length decoding part acquires by decoding the bit sequence by variable length (i.e., the information showing the motion vector that is used by the compensated motion prognosis part 5 of the moving image encoding device) to the compensated motion prognosis part 54.
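A list of the kind shown in Fig. 41 can be sketched as follows; only the rows for block sizes 64 and 8 are fixed by the text above, so the intermediate rows here are assumptions:

```python
# Hypothetical reconstruction of a Fig. 41 style list: the number of
# selectable motion vectors shrinks as the partition block size
# shrinks. Rows for sizes 32 and 16 are assumed; 64 and 8 follow the
# text.

CANDIDATES_BY_BLOCK_SIZE = {
    64: ["MV_A", "MV_B", "MV_C", "temporal"],
    32: ["MV_A", "MV_B", "temporal"],      # assumed
    16: ["medium", "MV_A", "temporal"],    # assumed
    8:  ["medium", "temporal"],
}

def candidate_direct_vector_list(block_size):
    """Returns the one or more selectable motion vectors for the given
    partition block size, as the coding control part would."""
    return CANDIDATES_BY_BLOCK_SIZE[block_size]
```

Because the decoder must hold the identical list, both sides would derive the same candidates from the block size alone, and the coded index then selects within that shorter list.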
When receiving the partition block size from the variable length decoding part 51, the direct vector generation part 65 of the compensated motion prognosis part 54 refers to the candidate direct vector index and outputs the motion vector that is used for the direct mode from the list of one or more candidate motion vectors that is predetermined according to the block size, as in the case of Modality 4 mentioned above. More specifically, the direct vector generation part 65 lists one or more selectable motion vectors in advance for each of the block sizes available for the partition (refer to Fig. 41), and, when determining one or more selectable motion vectors, refers to the list shown in Fig. 41 and to the direct vector index, and outputs the one or more motion vectors corresponding to the block size of the partition that is to be decoded this time. For example, in a case in which the block size of a partition is "8", the direct vector generation part outputs "medium" as a direct vector when the index information is the index of 0, and outputs "temporal" as a direct vector when the index information is the index of 1. As can be seen from the above description, because the coding control part according to this Modality 6 is constructed in such a way as to determine one or more selectable motion vectors according to the block size of the partition which is the coding block, motion vectors other than those suitable for generating a prognostic image can be removed from the candidates for a partition having a low correlation with adjacent blocks. Therefore, an advantage is provided of being able to reduce the amount of information to be processed.
Furthermore, because the coding control part according to this Modality 6 is constructed in such a way as to, when determining one or more selectable motion vectors, reduce the number of candidates for the one or more selectable motion vectors with the decrease in the block size of the partition, motion vectors other than those suitable for generating a prognostic image can be removed from the candidates. Therefore, an advantage is provided of being able to reduce the amount of information to be processed. Although the example in which the block size of the partition that is the coding block has a maximum of "64" is shown in this Modality 6, the block size may alternatively have a maximum greater than 64 or less than 64. Fig. 42 shows an example of a list whose maximum block size is "128". Although the maximum block size in each of the lists held by the coding control part 1 and the compensated motion prognosis part 54 is "128" in the example of Fig. 42, only the portion of the above-mentioned list in which the block sizes are equal to or less than "32" needs to be referred to when the maximum block size of the current partition is "32". In addition, although the example of determining one or more selectable motion vectors according to the block size of the partition that is the coding block is shown in this Modality 6, one or more selectable motion vectors can alternatively be determined according to the division pattern of the coding block, and the same advantages can be provided. Fig. 43 is an explanatory drawing of a list showing one or more selectable motion vectors that are determined for each of the division patterns available for the coding block.
For example, while "MV_A", "MV_B", "MV_C", and "temporal" are determined as the one or more selectable motion vectors when the partition that is the coding block is 2partH1, there is a high possibility that when the partition that is the coding block is 2partH2, its movement differs from that of 2partH1, which is the block located to the left of 2partH2. Therefore, "MV_A", which is the motion vector of the block located to the left of 2partH2, is removed from the one or more selectable motion vectors for 2partH2, and "MV_B", "MV_C", and "temporal" are determined as the one or more selectable motion vectors for 2partH2. In addition, although a vector in a temporal direction is used in this Modality 6, the data size of the vector when stored in memory can be compressed in order to reduce the amount of memory used to store the vector. For example, when the minimum block size is 4 × 4, a vector in a temporal direction is typically stored for each block having a size of 4 × 4; a method of storing a vector in a temporal direction for each block having a larger size can be considered instead. A problem with the aforementioned method of storing a vector in a temporal direction while compressing the vector data size is that, when performing processing in units of a block having a smaller block size than the unit for storing the compressed vector data, the position to be referenced does not indicate a correct position. To solve this problem, a process of not using any vector in a temporal direction when the block is smaller than the unit for storing the compressed vector data can be performed. By removing a vector having a low degree of accuracy from the candidates, an advantage is provided of reducing the amount of information to be processed and the amount of index code. In addition, although the direct vector mode is described in this Modality 6, the same method can be used to determine a predicted vector that is used for normal motion vector encoding.
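The compressed temporal-vector storage described above can be sketched as follows: one vector is kept per storage unit instead of one per 4 × 4 block, and the temporal candidate is dropped for blocks smaller than the storage unit. The storage-unit size of 16 is an assumed value; the text only requires it to exceed the minimum block size.

```python
STORAGE_UNIT = 16  # assumed size of the region sharing one stored vector

class TemporalVectorStore:
    def __init__(self):
        self._vectors = {}

    def store(self, x, y, mv):
        # All positions inside one storage unit share a single stored vector,
        # which is what compresses the vector data size.
        self._vectors[(x // STORAGE_UNIT, y // STORAGE_UNIT)] = mv

    def temporal_candidate(self, x, y, block_size):
        # A block smaller than the storage unit would reference an imprecise
        # position, so its temporal candidate is removed (returned as None).
        if block_size < STORAGE_UNIT:
            return None
        return self._vectors.get((x // STORAGE_UNIT, y // STORAGE_UNIT))
```

A block of size 16 or larger retrieves the shared vector; an 8 × 8 block gets no temporal candidate, as described in the text.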
Using this method, an advantage is provided of achieving both a reduction in the amount of information to be processed and an improvement in the coding efficiency. In addition, this Modality 6 is constructed in such a way that, when the ref_Idx of a direct vector or of a vector that is desired to be predicted differs from the ref_Idx of every one of a number of candidate vectors that are used for the generation of the direct vector or the determination of the predicted vector (i.e., the figure that is the reference destination of the direct vector or of the vector to be predicted differs from that of every candidate vector), a scaling process according to the distance in a temporal direction is performed on each of the candidate vectors, as shown in Fig. 14: scaled_MV = MV × d(Xr) / d(Yr) (16) where scaled_MV denotes a scaled vector, MV denotes a motion vector yet to be scaled, and d(x) denotes the temporal distance to x. In addition, Xr denotes the reference image shown by the coding block, and Yr denotes the reference image shown by each of the positions of blocks A to D which are the targets for scaling. When the ref_Idx of the direct vector or of the vector that is desired to be predicted is the same as the ref_Idx of one of the candidate vectors, the scaling process according to the temporal distance is not performed. In addition, this modality is constructed in such a way that a block that is inter-coded is searched for from among the target blocks, and all the vectors included in that block are used as spatial vector candidates, as shown in Fig. 49. There can be a case in which the reference figure that is to be indicated by the direct vector or by the vector that is desired to be predicted is the same as that indicated by one of these candidate vectors, and a case in which it differs from that indicated by every one of these candidate vectors, as mentioned above.
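The scaling of equation (16) can be sketched per vector component as follows. This is a minimal sketch: a candidate whose ref_Idx already matches the target is left unscaled, as described above; the tuple layout is an assumption of this sketch, and the integer rounding used in real codecs is omitted.

```python
def scale_candidate(mv, cand_ref_idx, target_ref_idx, d_xr, d_yr):
    """Apply equation (16), scaled_MV = MV * d(Xr) / d(Yr), to one candidate.

    mv:             candidate motion vector (mvx, mvy)
    d_xr:           temporal distance to the reference image of the coding block
    d_yr:           temporal distance to the reference image of the candidate
    """
    if cand_ref_idx == target_ref_idx:
        return mv  # same reference picture: no scaling is performed
    return (mv[0] * d_xr / d_yr, mv[1] * d_xr / d_yr)
```

For example, a candidate (4, -2) with a temporal distance of 2, scaled to a target distance of 1, becomes (2.0, -1.0).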
In the former case, this modality can be constructed in such a way that only the candidate vectors indicating the same reference figure are used as candidates. In the latter case, this modality can be constructed in such a way that a correction process of performing a scaling process to make each candidate vector indicate the same reference figure is carried out. The former case provides an advantage of removing a vector having a low degree of accuracy from the candidates without increasing the amount of information to be processed. The latter case provides an advantage of reducing the amount of code because, although the amount of information to be processed increases due to the search, the number of candidates for selection can be increased. In addition, in the case of scaling as shown in equation (16), a candidate vector whose ref_Idx differs from the ref_Idx of the direct vector or of the vector that is desired to be predicted can be scaled at the time of finding a block that is inter-coded (a candidate vector whose ref_Idx is the same as the ref_Idx of the direct vector or of the vector that is desired to be predicted is not scaled), or the scaling can be performed only when there is no candidate vector whose ref_Idx is the same as the ref_Idx of the direct vector or of the vector that is desired to be predicted after all the blocks have been searched. Because a vector having an improved degree of accuracy can be added to the candidates, although the amount of information to be processed increases, an advantage of reducing the amount of code is provided. Modality 7.
Although the example in which the coding control part 1 of the moving image encoding device maintains a list showing the selectable motion vectors and the compensated motion prognosis part 54 of the moving image decoding device also maintains a list showing the selectable motion vectors is shown in Modality 6 mentioned above, the variable length encoding part 13 of the moving image encoding device can perform variable length encoding of list information showing the list, multiplex the encoded list information into, for example, each slice header, and transmit the encoded data to the moving image decoding device. In this case, the variable length decoding part 51 of the moving image decoding device performs variable length decoding of the encoded data that is multiplexed into each slice header to acquire the list information, and outputs the list shown by the list information to the direct vector generation part 65 of the compensated motion prognosis part 54. The moving image encoding device can transmit the list information showing the list to the moving image decoding device on a per-slice basis (or on a per-sequence basis, on a per-figure basis, or the like) in this manner. As an alternative, only when the list currently being maintained by the coding control part 1 is changed, the moving image encoding device can transmit the list information showing the changed list to the moving image decoding device. Hereafter, these processes will be explained concretely. Fig. 44 is a flow chart showing a transmission process for transmitting list information that is performed by the moving image encoding device in accordance with this modality, and Fig. 45 is a flow chart showing a reception process for receiving list information that is performed by the moving image decoding device in accordance with this modality.
While the coding control part 1 of the moving image encoding device determines one or more selectable motion vectors according to the block size of a partition which is a coding block, as in Modality 6 mentioned above, the coding control part 1 checks whether the list to which the coding control part refers when determining the one or more motion vectors has changed and, when the list is the same as the previous list (step ST41 of Fig. 44), sets a change flag to "OFF" in order to notify the moving image decoding device that the list is the same as the previous list (step ST42). When the coding control part 1 sets the change flag to "OFF", the variable length encoding part 13 encodes the change flag set to "OFF" and transmits the encoded data of the change flag to the moving image decoding device (step ST43). Conversely, when the list differs from the previous list (step ST41), the coding control part 1 sets the change flag to "ON" in order to notify the moving image decoding device that the list differs from the previous list (step ST44). When the coding control part 1 sets the change flag to "ON", the variable length encoding part 13 encodes the change flag set to "ON" and the list information showing the changed list, and transmits the encoded data of the change flag and the list information to the moving image decoding device (step ST45). Fig. 46 shows an example in which the change flag set to "ON" and the list information showing the changed list are encoded because "temporal" in the list is changed from being selectable to not being selectable. The variable length decoding part 51 of the moving image decoding device decodes the encoded data to acquire the change flag (step ST51 of Fig. 45) and, when the change flag is set to "OFF" (step ST52), outputs the change flag set to "OFF" to the compensated motion prognosis part 54.
When receiving the change flag set to "OFF" from the variable length decoding part 51, the compensated motion prognosis part 54 recognizes that the list is the same as the previous list and configures the list currently being maintained as a candidate for reference (step ST53). Therefore, the compensated motion prognosis part 54 determines the one or more motion vectors corresponding to the block size of the partition that is to be decoded at this time by referring to the list currently being maintained. Conversely, when the change flag is set to "ON" (step ST52), the variable length decoding part 51 of the moving image decoding device decodes the encoded data to acquire the list information, and outputs the change flag set to "ON" and the list information to the compensated motion prognosis part 54 (step ST54). When receiving the change flag set to "ON" and the list information from the variable length decoding part 51, the compensated motion prognosis part 54 recognizes that the list differs from the previous list, changes the list currently being maintained according to the list information, and configures the list thus changed as a candidate for reference (step ST55). Then, the compensated motion prognosis part 54 determines the one or more motion vectors corresponding to the block size of the partition that is to be decoded at this time by referring to the list thus changed. Fig. 47 shows an example in which the list currently being maintained is changed because the change flag is set to "ON". As can be seen from the description above, because the moving image encoding device in accordance with this Modality 7 is constructed in such a way as to encode the list information showing the changed list to generate encoded data only when the list showing the one or more selectable motion vectors is changed, an advantage is provided of being able to install a function for accepting a change of the list smoothly without causing a large increase in the amount of code.
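The slice-header signalling of Figs. 44 and 45 can be sketched as follows: the encoder sends the list information only when the list has changed, and otherwise the decoder keeps the list it currently maintains. The payload field names are illustrative, not the actual syntax elements.

```python
def encode_list_update(previous_list, current_list):
    """Encoder side (Fig. 44): send only a flag when nothing changed."""
    if current_list == previous_list:                 # steps ST41-ST43
        return {"change_flag": "OFF"}
    return {"change_flag": "ON",                      # steps ST44-ST45
            "list_information": current_list}

def decode_list_update(maintained_list, payload):
    """Decoder side (Fig. 45): keep or replace the maintained list."""
    if payload["change_flag"] == "OFF":               # steps ST51-ST53
        return maintained_list
    return payload["list_information"]                # steps ST54-ST55
```

A round trip with an unchanged list costs only the flag, which is what keeps the increase in the amount of code small.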
Although the example of encoding the list information showing the whole of the changed list, even when only part of the one or more selectable motion vectors shown by the list is changed, is shown in this Modality 7, a change flag can alternatively be prepared for each block size, the change flag prepared for a block size for which the one or more selectable motion vectors are changed can be set to "ON", and only the list information associated with that block size can be encoded, as shown in Fig. 48. Because the motion vectors in the case of a block size of "64" and the motion vectors in the case of a block size of "8" are not changed in the example shown in Fig. 48, their change flags are set to "OFF" and the list information associated with each of these block sizes is not encoded. In contrast, because the motion vectors in the case of a block size of "32" and the motion vectors in the case of a block size of "16" are changed in the example, their change flags are set to "ON" and the list information associated with each of these block sizes is encoded. When the change flag for at least one of the block sizes is set to "ON", the change flag prepared for each block size can be encoded, whereas when the change flag of every block size is set to "OFF", only the list change flag (the change flag set to "OFF") can be encoded. As an alternative, instead of using the change flag for each list, only the change flag prepared for each block size can be encoded. Although the example of being able to change the selectable motion vectors for each block size is shown, the selectable motion vectors can also be changed for each division pattern of the coding block.
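The per-block-size variant of Fig. 48 can be sketched as follows: one change flag per block size, with list information encoded only for the sizes whose candidate lists actually changed. The dictionary keys and field names are illustrative.

```python
def encode_per_size_update(previous_lists, current_lists):
    """Build the per-block-size payload of Fig. 48: sizes whose candidate
    list is unchanged carry only an OFF flag; changed sizes carry the flag
    set to ON plus their new list information."""
    payload = {}
    for size, candidates in current_lists.items():
        if candidates == previous_lists.get(size):
            payload[size] = {"change_flag": "OFF"}
        else:
            payload[size] = {"change_flag": "ON",
                             "list_information": candidates}
    return payload
```

Only the changed sizes contribute list information to the payload, mirroring the example in which the lists for sizes "64" and "8" are not encoded while those for "32" and "16" are.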
INDUSTRIAL APPLICABILITY Because the moving image encoding device, the moving image decoding device, the moving image encoding method, and the moving image decoding method according to the present invention make it possible to select an optimum direct mode for each predetermined block unit and reduce the amount of code, they are suitable for use as a moving image encoding device, a moving image decoding device, a moving image encoding method, and a moving image decoding method that are used for an image compression encoding technology, a compressed image data transmission technology, etc., respectively. EXPLANATIONS OF REFERENCE NUMBERS 1 - coding control part (coding control unit), 2 - block dividing part (block dividing unit), 3 - selection switch (intra-prognosis unit and compensated motion prognosis unit), 4 - intra-prognosis part (intra-prognosis unit), 5 - compensated motion prognosis part (compensated motion prognosis unit), 6 - subtraction part (difference image generation unit), 7 - transformation / quantization part (image compression unit), 8 - reverse quantization / reverse transformation part, 9 - addition part, 10 - memory for intra-prognosis, 11 - loop filtering part, 12 - compensated motion prognosis frame memory, 13 - variable length encoding part (variable length encoding unit), 21 - selection switch, 22 - motion vector search part, 23 - direct vector generation part, 24 - motion compensation processing part, 25, 26, and 27 - direct vector generation part, 31 - spatial direct vector generation part, 32 - temporal direct vector generation part, 33 - direct vector determination part, 34, 36, and 37 - initial vector generation part, 35 - motion vector search part, 41 - motion compensation part, 42 - similarity calculation part, 43 - direct vector selection part, 51 - variable length decoding part (variable length decoding unit), 52 - selection switch (intra-prognosis unit and compensated motion prognosis unit), 53 - intra-prognosis part
(intra-prognosis unit), 54 - compensated motion prognosis part (compensated motion prognosis unit), 55 - reverse quantization / inverse transformation part (difference image generation unit), 56 - addition part (decoded image generation unit), 57 - memory for intra-prognosis, 11 - loop filtering part, 12 - compensated motion prognosis frame memory, 61 - selection switch, 62 - direct vector generation part, 63 - motion compensation processing part, 64, 65, and 66 - direct vector generation part, 71 - spatial vector generation part, 72 - temporal vector generation part, 73 - initial vector determination part, 81 - motion compensation part, 82 - similarity calculation part, 83 - initial vector determination part, 91 - selection switch.
Claims:
Claims (2) [1] 1. An image decoding device, characterized by the fact that it comprises: a variable length decoding unit for carrying out a variable length decoding process on encoded data multiplexed in a bit stream to acquire compressed data, a coding mode, and index information, each associated with a coding block; a compensated motion prognosis unit for carrying out a compensated motion prognosis process on the mentioned coding block based on the mentioned coding mode to generate a prognostic image using a motion vector selected from one or more selectable motion vectors, the mentioned compensated motion prognosis unit selecting the mentioned motion vector indicated by the mentioned index information; and a decoded image generation unit for generating decoded image data by adding a difference image that is obtained by decoding the mentioned compressed data and the mentioned prognostic image generated by the mentioned compensated motion prognosis unit; wherein the mentioned compensated motion prognosis unit selects, according to the mentioned index information, a spatial motion vector that is obtained from a decoded block located around the mentioned coding block or a temporal motion vector that is obtained from a decoded figure that can be referred to by the mentioned coding block. [2] 2.
An image decoding method, characterized by the fact that it comprises: a step of carrying out a variable length decoding process on encoded data multiplexed in a bit stream to acquire compressed data, a coding mode, and index information, each associated with a coding block; a step of carrying out a compensated motion prognosis process on the mentioned coding block based on the mentioned coding mode to generate a prognostic image using a motion vector selected from one or more selectable motion vectors, the mentioned motion vector being selected according to the mentioned index information; and a step of generating decoded image data by adding a difference image that is obtained by decoding the mentioned compressed data and the mentioned prognostic image; wherein the mentioned one or more selectable motion vectors include at least a spatial motion vector that is obtained from a decoded block located around the mentioned coding block or a temporal motion vector that is obtained from a decoded figure that can be referred to by the mentioned coding block.
Similar technologies:
公开号 | 公开日 | 专利标题 BR112013006499A2|2020-08-04|image decoding devices, and, image decoding methods CN107431820A|2017-12-01|Motion vector derives in video coding KR20080068678A|2008-07-23|Dynamic image encoding device and dynamic image decoding device BR112013017208B1|2019-01-29|PREDICTIVE CODING METHOD AND MOVEMENT PREDICTIVE CODING DEVICE, AND PREDICTIVE DECODING METHOD AND PREDICTIVE MOVEMENT DECODING DEVICE BR112019027261A2|2020-07-14|motion vector refinement for multi-reference prediction CN111201795A|2020-05-26|Memory access window and padding for motion vector modification CN111698500B|2022-03-01|Encoding and decoding method, device and equipment CN112292861A|2021-01-29|Sub-pixel accurate correction method based on error surface for decoding end motion vector correction JP5206772B2|2013-06-12|Moving picture coding apparatus, moving picture coding method, and moving picture coding program JP2012186762A|2012-09-27|Video encoding device, video decoding device, video encoding method, and video decoding method JP5206773B2|2013-06-12|Moving picture decoding apparatus, moving picture decoding method, and moving picture decoding program TW202205852A|2022-02-01|Encoding and decoding method, apparatus and device thereof CN111064964A|2020-04-24|Encoding and decoding method, device and equipment BR112013014258B1|2021-11-23|IMAGE ENCODING DEVICE, E, IMAGE ENCODING METHOD JP2012080210A|2012-04-19|Moving image encoder, moving image decoder, moving image encoding method, and moving image decoding method JP2011254395A|2011-12-15|Moving image encoding apparatus, moving image encoding method and moving image encoding program JP2011254396A|2011-12-15|Moving image decoding apparatus, moving image decoding method and moving image decoding program
Family patents:
公开号 | 公开日 RU2654136C2|2018-05-16| KR101914018B1|2018-10-31| CN106713931A|2017-05-24| MX2013003694A|2013-04-24| KR20170037675A|2017-04-04| KR20180017217A|2018-02-20| JP6768017B2|2020-10-14| CA2813232C|2020-02-04| TW201216717A|2012-04-16| SG10201707379SA|2017-10-30| JP2018067974A|2018-04-26| JP2017060197A|2017-03-23| CN106454379B|2019-06-28| CN103222265B|2017-02-08| RU2549512C2|2015-04-27| RU2680199C1|2019-02-18| CN106713930B|2019-09-03| TWI571108B|2017-02-11| TWI581622B|2017-05-01| EP2624562A1|2013-08-07| RU2015108787A|2015-08-10| CN106488249A|2017-03-08| JP6071922B2|2017-02-01| CA2813232A1|2012-04-05| SG189114A1|2013-05-31| CN106713930A|2017-05-24| TW201640898A|2016-11-16| KR20130076879A|2013-07-08| RU2706179C1|2019-11-14| KR101723282B1|2017-04-04| CN106454379A|2017-02-22| JP6768110B2|2020-10-14| KR102013093B1|2019-08-21| WO2012042719A1|2012-04-05| US20130177076A1|2013-07-11| KR101554792B1|2015-09-21| CN103222265A|2013-07-24| US20150245035A1|2015-08-27| RU2013119920A|2014-11-10| US9900611B2|2018-02-20| KR101829594B1|2018-02-14| CA3033984A1|2012-04-05| CA2991166C|2019-04-09| CN106713931B|2019-09-03| US9894376B2|2018-02-13| RU2016132603A|2018-02-14| RU2597499C2|2016-09-10| KR20150010774A|2015-01-28| US20150245057A1|2015-08-27| JP2014112939A|2014-06-19| US20150245032A1|2015-08-27| US20150281725A1|2015-10-01| KR20180118814A|2018-10-31| CN106488249B|2019-11-05| JP6312787B2|2018-04-18| JP5486091B2|2014-05-07| JPWO2012042719A1|2014-02-03| EP2624562A4|2014-11-19| SG10201802064VA|2018-05-30| US9894375B2|2018-02-13| SG10201701439WA|2017-03-30| JP2019146245A|2019-08-29| JP2020202590A|2020-12-17| CA2991166A1|2012-04-05| US9369730B2|2016-06-14| US9900612B2|2018-02-20| SG10201506682SA|2015-10-29|
Cited documents:
公开号 | 申请日 | 公开日 | 申请人 | 专利标题 JP3351645B2|1995-01-31|2002-12-03|松下電器産業株式会社|Video coding method| JP3628810B2|1996-06-28|2005-03-16|三菱電機株式会社|Image encoding device| FR2756399B1|1996-11-28|1999-06-25|Thomson Multimedia Sa|VIDEO COMPRESSION METHOD AND DEVICE FOR SYNTHESIS IMAGES| CN100518319C|1996-12-18|2009-07-22|汤姆森消费电子有限公司|Fixed-length block data compression and decompression method| JP4114859B2|2002-01-09|2008-07-09|松下電器産業株式会社|Motion vector encoding method and motion vector decoding method| JP2004088722A|2002-03-04|2004-03-18|Matsushita Electric Ind Co Ltd|Motion picture encoding method and motion picture decoding method| AU2003279015A1|2002-09-27|2004-04-19|Videosoft, Inc.|Real-time video coding/decoding| KR100506864B1|2002-10-04|2005-08-05|엘지전자 주식회사|Method of determining motion vector| KR100990829B1|2002-11-01|2010-10-29|파나소닉 주식회사|Motion picture encoding method and motion picture decoding method| JP2005005844A|2003-06-10|2005-01-06|Hitachi Ltd|Computation apparatus and coding processing program| US7688894B2|2003-09-07|2010-03-30|Microsoft Corporation|Scan patterns for interlaced video content| US7567617B2|2003-09-07|2009-07-28|Microsoft Corporation|Predicting motion vectors for fields of forward-predicted interlaced video frames| US7724827B2|2003-09-07|2010-05-25|Microsoft Corporation|Multi-layer run level encoding and decoding| EP1835747B1|2005-01-07|2019-05-08|Nippon Telegraph And Telephone Corporation|Video encoding method and device, video decoding method and device, program thereof, and recording medium containing the program| JP2007221202A|2006-02-14|2007-08-30|Victor Co Of Japan Ltd|Moving picture encoder and moving picture encoding program| JPWO2007136088A1|2006-05-24|2009-10-01|パナソニック株式会社|Image encoding apparatus, image encoding method, and integrated circuit for image encoding| CN101507280B|2006-08-25|2012-12-26|汤姆逊许可公司|Methods and apparatus for reduced resolution partitioning| JP5025286B2|2007-02-28|2012-09-12|シャープ株式会社|Encoding device and decoding 
device| BRPI0808679A2|2007-03-29|2014-09-02|Sharp Kk|VIDEO IMAGE TRANSMISSION DEVICE, VIDEO IMAGE RECEPTION DEVICE, VIDEO IMAGE RECORDING DEVICE, VIDEO IMAGE PLAYBACK DEVICE AND VIDEO IMAGE DISPLAY DEVICE| BRPI0809512A2|2007-04-12|2016-03-15|Thomson Licensing|context-dependent merge method and apparatus for direct jump modes for video encoding and decoding| JP2008283490A|2007-05-10|2008-11-20|Ntt Docomo Inc|Moving image encoding device, method and program, and moving image decoding device, method and program| JP2008311781A|2007-06-12|2008-12-25|Ntt Docomo Inc|Motion picture encoder, motion picture decoder, motion picture encoding method, motion picture decoding method, motion picture encoding program and motion picture decoding program| EP2200323A4|2007-09-25|2012-03-14|Sharp Kk|Moving image encoder and moving image decoder| BRPI0818649A2|2007-10-16|2015-04-07|Thomson Licensing|Methods and apparatus for encoding and decoding video in geometrically partitioned superblocks.| CN101884219B|2007-10-16|2014-08-20|Lg电子株式会社|A method and an apparatus for processing a video signal| JP2009147807A|2007-12-17|2009-07-02|Fujifilm Corp|Image processing apparatus| KR101505195B1|2008-02-20|2015-03-24|삼성전자주식회사|Method for direct mode encoding and decoding| EP2266318B1|2008-03-19|2020-04-22|Nokia Technologies Oy|Combined motion vector and reference index prediction for video coding| JPWO2009128208A1|2008-04-16|2011-08-04|株式会社日立製作所|Moving picture encoding apparatus, moving picture decoding apparatus, moving picture encoding method, and moving picture decoding method| JP4937224B2|2008-09-30|2012-05-23|株式会社東芝|Image encoding device| US8483285B2|2008-10-03|2013-07-09|Qualcomm Incorporated|Video coding using transforms bigger than 4×4 and 8×8| US20100166073A1|2008-12-31|2010-07-01|Advanced Micro Devices, Inc.|Multiple-Candidate Motion Estimation With Advanced Spatial Filtering of Differential Motion Vectors| TWI405469B|2009-02-20|2013-08-11|Sony Corp|Image processing apparatus and method| 
EP2224738A1|2009-02-27|2010-09-01|Nxp B.V.|Identifying occlusions| US8391365B2|2009-03-20|2013-03-05|National Cheng Kung University|Motion estimator and a motion estimation method| US9060176B2|2009-10-01|2015-06-16|Ntt Docomo, Inc.|Motion vector prediction in video coding| WO2011099242A1|2010-02-12|2011-08-18|三菱電機株式会社|Image encoding device, image decoding device, image encoding method, and image decoding method|JP3383236B2|1998-12-01|2003-03-04|株式会社日立製作所|Etching end point determining method and etching end point determining apparatus| CN106210737B|2010-10-06|2019-05-21|株式会社Ntt都科摩|Image prediction/decoding device, image prediction decoding method| JP5807588B2|2011-03-08|2015-11-10|株式会社Jvcケンウッド|Moving picture encoding apparatus, moving picture encoding method, moving picture encoding program, transmission apparatus, transmission method, and transmission program| JP5682582B2|2011-03-08|2015-03-11|株式会社Jvcケンウッド|Moving picture decoding apparatus, moving picture decoding method, moving picture decoding program, receiving apparatus, receiving method, and receiving program| WO2012174423A1|2011-06-17|2012-12-20|President And Fellows Of Harvard College|Stabilized polypeptides as regulators of rab gtpase function| KR102030205B1|2012-01-20|2019-10-08|선 페이턴트 트러스트|Methods and apparatuses for encoding and decoding video using temporal motion vector prediction| EP2811743B1|2012-02-03|2021-03-03|Sun Patent Trust|Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding/decoding device| WO2013132792A1|2012-03-06|2013-09-12|パナソニック株式会社|Method for coding video, method for decoding video, device for coding video, device for decoding video, and device for coding/decoding video| WO2014073173A1|2012-11-06|2014-05-15|日本電気株式会社|Video encoding method, video encoding device, and video encoding program| KR102088383B1|2013-03-15|2020-03-12|삼성전자주식회사|Method and apparatus for encoding and decoding video| US9442637B1|2013-06-17|2016-09-13|Xdroid 
Kft|Hierarchical navigation and visualization system| US9693076B2|2014-01-07|2017-06-27|Samsung Electronics Co., Ltd.|Video encoding and decoding methods based on scale and angle variation information, and video encoding and decoding apparatuses for performing the methods| JP6187286B2|2014-01-28|2017-08-30|富士通株式会社|Video encoding device| GB201405649D0|2014-03-28|2014-05-14|Sony Corp|Data encoding and decoding| WO2015163167A1|2014-04-23|2015-10-29|ソニー株式会社|Image-processing device, and image-processing method| US10283091B2|2014-10-13|2019-05-07|Microsoft Technology Licensing, Llc|Buffer optimization| KR102288949B1|2015-01-22|2021-08-12|현대모비스 주식회사|Brake system for an automobile| US10694202B2|2016-12-01|2020-06-23|Qualcomm Incorporated|Indication of bilateral filter usage in video coding| US20180242024A1|2017-02-21|2018-08-23|Mediatek Inc.|Methods and Apparatuses of Candidate Set Determination for Quad-tree Plus Binary-tree Splitting Blocks| WO2019049912A1|2017-09-08|2019-03-14|パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ|Coding device, decoding device, coding method, and decoding method| KR102285739B1|2017-11-09|2021-08-04|삼성전자주식회사|Apparatus and method for encoding image based on motion vector resolution, and apparatus and method for decoding image| WO2020005002A1|2018-06-28|2020-01-02|엘지전자 주식회사|Method and device for deriving template area according to inter-prediction in image coding system| TWI725463B|2018-07-01|2021-04-21|大陸商北京字節跳動網絡技術有限公司|Spatial motion compression| US10735763B2|2018-07-27|2020-08-04|Tencent America LLC|Method and apparatus for motion vector prediction using spatial and temporal combination| US11190792B2|2020-01-09|2021-11-30|Telefonaktiebolaget Lm Ericsson |Picture header presence|
Legal status:
2020-08-18| B06F| Objections, documents and/or translations needed after an examination request according [chapter 6.6 patent gazette]| 2020-08-18| B15K| Others concerning applications: alteration of classification|Free format text: THE PREVIOUS CLASSIFICATION WAS: H04N 7/32 Ipc: H04N 19/109 (2014.01), H04N 19/513 (2014.01), H04N | 2020-08-25| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]| 2021-11-03| B350| Update of information on the portal [chapter 15.35 patent gazette]| 2022-02-01| B07A| Application suspended after technical examination (opinion) [chapter 7.1 patent gazette]|
Priority:
申请号 | 申请日 | 专利标题 JP2010-221460|2010-09-30| JP2010221460|2010-09-30| JP2011050214|2011-03-08| JP2011-050214|2011-03-08| PCT/JP2011/004121|WO2012042719A1|2010-09-30|2011-07-21|Dynamic image encoding device, dynamic image decoding device, dynamic image encoding method, and dynamic image decoding method| 相关专利